Test Report: KVM_Linux_crio 17975

                    
e1aa8054ab618099e4da6d3513143595f1f851ba:2024-01-17:32738

Failed tests (23/312)

| Order | Failed test | Duration (s) |
|-------|-------------|--------------|
| 39 | TestAddons/parallel/Ingress | 154.72 |
| 53 | TestAddons/StoppedEnableDisable | 155.6 |
| 169 | TestIngressAddonLegacy/serial/ValidateIngressAddons | 181 |
| 224 | TestMultiNode/serial/RestartKeepsNodes | 690.12 |
| 226 | TestMultiNode/serial/StopMultiNode | 143.05 |
| 233 | TestPreload | 291.27 |
| 338 | TestStartStop/group/old-k8s-version/serial/Stop | 140.05 |
| 341 | TestStartStop/group/no-preload/serial/Stop | 140.23 |
| 344 | TestStartStop/group/embed-certs/serial/Stop | 140.01 |
| 347 | TestStartStop/group/default-k8s-diff-port/serial/Stop | 140.17 |
| 348 | TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop | 12.42 |
| 350 | TestStartStop/group/no-preload/serial/EnableAddonAfterStop | 12.38 |
| 351 | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop | 12.38 |
| 354 | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop | 12.38 |
| 356 | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop | 543.21 |
| 357 | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop | 543.2 |
| 358 | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop | 543.15 |
| 359 | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 543.16 |
| 360 | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop | 543.21 |
| 361 | TestStartStop/group/no-preload/serial/AddonExistsAfterStop | 411.04 |
| 362 | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop | 143.01 |
| 363 | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop | 125.65 |
| 365 | TestStartStop/group/old-k8s-version/serial/Pause | 5.48 |
TestAddons/parallel/Ingress (154.72s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-033244 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-033244 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-033244 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [161da6d6-00f5-4bed-85ce-e0fe7e9ef47e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [161da6d6-00f5-4bed-85ce-e0fe7e9ef47e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004304605s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-033244 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.904173198s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-033244 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.234
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-033244 addons disable ingress-dns --alsologtostderr -v=1: (1.632735113s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-033244 addons disable ingress --alsologtostderr -v=1: (7.932708358s)
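For context on the failure above: the remote command run by `minikube ssh` was curl, and curl's exit status 28 means the request timed out, i.e. nothing answered on 127.0.0.1 inside the VM with the test Host header during the 2m8s attempt. Below is a minimal sketch of the same check run by hand against this profile; the commands are taken from the log above, and it assumes the addons-033244 cluster is still running and that the testdata manifests are available in the working directory (as they are when run from the minikube test tree).

# Wait for the ingress-nginx controller pod to be Ready (the test uses a 90s timeout).
kubectl --context addons-033244 wait --for=condition=ready \
  --namespace=ingress-nginx pod \
  --selector=app.kubernetes.io/component=controller --timeout=90s

# Deploy the test Ingress plus the backing nginx pod and service.
kubectl --context addons-033244 replace --force -f testdata/nginx-ingress-v1.yaml
kubectl --context addons-033244 replace --force -f testdata/nginx-pod-svc.yaml

# Repeat the failing step: curl the ingress from inside the VM with the test Host header.
# An exit status of 28 from curl, as seen in the stderr block above, indicates a timeout.
out/minikube-linux-amd64 -p addons-033244 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"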
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-033244 -n addons-033244
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-033244 logs -n 25: (1.320611835s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-106740                                                                     | download-only-106740 | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC | 16 Jan 24 22:37 UTC |
	| delete  | -p download-only-892925                                                                     | download-only-892925 | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC | 16 Jan 24 22:37 UTC |
	| delete  | -p download-only-404581                                                                     | download-only-404581 | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC | 16 Jan 24 22:37 UTC |
	| delete  | -p download-only-106740                                                                     | download-only-106740 | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC | 16 Jan 24 22:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-992131 | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC |                     |
	|         | binary-mirror-992131                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46169                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-992131                                                                     | binary-mirror-992131 | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC | 16 Jan 24 22:37 UTC |
	| addons  | enable dashboard -p                                                                         | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC |                     |
	|         | addons-033244                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC |                     |
	|         | addons-033244                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-033244 --wait=true                                                                | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:37 UTC | 16 Jan 24 22:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-033244 addons                                                                        | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:40 UTC | 16 Jan 24 22:40 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-033244 addons disable                                                                | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:40 UTC | 16 Jan 24 22:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | -p addons-033244                                                                            |                      |         |         |                     |                     |
	| ip      | addons-033244 ip                                                                            | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	| addons  | addons-033244 addons disable                                                                | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | addons-033244                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | -p addons-033244                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | addons-033244                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-033244 ssh cat                                                                       | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | /opt/local-path-provisioner/pvc-65aa8f6a-073e-4e60-ba0a-da47faceff6d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-033244 addons disable                                                                | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-033244 ssh curl -s                                                                   | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-033244 addons                                                                        | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-033244 addons                                                                        | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:41 UTC | 16 Jan 24 22:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-033244 ip                                                                            | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:43 UTC | 16 Jan 24 22:43 UTC |
	| addons  | addons-033244 addons disable                                                                | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:43 UTC | 16 Jan 24 22:43 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-033244 addons disable                                                                | addons-033244        | jenkins | v1.32.0 | 16 Jan 24 22:43 UTC | 16 Jan 24 22:43 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 22:37:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 22:37:18.112652   15752 out.go:296] Setting OutFile to fd 1 ...
	I0116 22:37:18.112771   15752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:37:18.112779   15752 out.go:309] Setting ErrFile to fd 2...
	I0116 22:37:18.112784   15752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:37:18.113007   15752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 22:37:18.113603   15752 out.go:303] Setting JSON to false
	I0116 22:37:18.114357   15752 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1184,"bootTime":1705443454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 22:37:18.114428   15752 start.go:138] virtualization: kvm guest
	I0116 22:37:18.116570   15752 out.go:177] * [addons-033244] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 22:37:18.118358   15752 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 22:37:18.118312   15752 notify.go:220] Checking for updates...
	I0116 22:37:18.119997   15752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 22:37:18.121157   15752 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:37:18.122291   15752 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:37:18.123594   15752 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 22:37:18.124848   15752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 22:37:18.126083   15752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 22:37:18.156258   15752 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 22:37:18.157488   15752 start.go:298] selected driver: kvm2
	I0116 22:37:18.157507   15752 start.go:902] validating driver "kvm2" against <nil>
	I0116 22:37:18.157517   15752 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 22:37:18.158266   15752 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:37:18.158362   15752 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 22:37:18.172242   15752 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 22:37:18.172299   15752 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 22:37:18.172497   15752 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 22:37:18.172557   15752 cni.go:84] Creating CNI manager for ""
	I0116 22:37:18.172569   15752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:37:18.172580   15752 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 22:37:18.172588   15752 start_flags.go:321] config:
	{Name:addons-033244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-033244 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:37:18.172710   15752 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:37:18.174523   15752 out.go:177] * Starting control plane node addons-033244 in cluster addons-033244
	I0116 22:37:18.175777   15752 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 22:37:18.175799   15752 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 22:37:18.175806   15752 cache.go:56] Caching tarball of preloaded images
	I0116 22:37:18.175865   15752 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 22:37:18.175875   15752 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 22:37:18.176157   15752 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/config.json ...
	I0116 22:37:18.176176   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/config.json: {Name:mk43df8ec61b8a959c8a1a69e0a0ef7807339e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:18.176300   15752 start.go:365] acquiring machines lock for addons-033244: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 22:37:18.176342   15752 start.go:369] acquired machines lock for "addons-033244" in 29.539µs
	I0116 22:37:18.176359   15752 start.go:93] Provisioning new machine with config: &{Name:addons-033244 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-033244 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 22:37:18.176412   15752 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 22:37:18.178076   15752 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0116 22:37:18.178191   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:37:18.178224   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:37:18.191512   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0116 22:37:18.191994   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:37:18.192518   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:37:18.192542   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:37:18.192890   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:37:18.193057   15752 main.go:141] libmachine: (addons-033244) Calling .GetMachineName
	I0116 22:37:18.193205   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:18.193343   15752 start.go:159] libmachine.API.Create for "addons-033244" (driver="kvm2")
	I0116 22:37:18.193369   15752 client.go:168] LocalClient.Create starting
	I0116 22:37:18.193423   15752 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem
	I0116 22:37:18.332261   15752 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem
	I0116 22:37:18.431753   15752 main.go:141] libmachine: Running pre-create checks...
	I0116 22:37:18.431776   15752 main.go:141] libmachine: (addons-033244) Calling .PreCreateCheck
	I0116 22:37:18.432266   15752 main.go:141] libmachine: (addons-033244) Calling .GetConfigRaw
	I0116 22:37:18.432720   15752 main.go:141] libmachine: Creating machine...
	I0116 22:37:18.432735   15752 main.go:141] libmachine: (addons-033244) Calling .Create
	I0116 22:37:18.432877   15752 main.go:141] libmachine: (addons-033244) Creating KVM machine...
	I0116 22:37:18.434243   15752 main.go:141] libmachine: (addons-033244) DBG | found existing default KVM network
	I0116 22:37:18.434977   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:18.434839   15774 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015350}
	I0116 22:37:18.440491   15752 main.go:141] libmachine: (addons-033244) DBG | trying to create private KVM network mk-addons-033244 192.168.39.0/24...
	I0116 22:37:18.506576   15752 main.go:141] libmachine: (addons-033244) DBG | private KVM network mk-addons-033244 192.168.39.0/24 created
	I0116 22:37:18.506610   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:18.506530   15774 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:37:18.506635   15752 main.go:141] libmachine: (addons-033244) Setting up store path in /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244 ...
	I0116 22:37:18.506660   15752 main.go:141] libmachine: (addons-033244) Building disk image from file:///home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 22:37:18.506682   15752 main.go:141] libmachine: (addons-033244) Downloading /home/jenkins/minikube-integration/17975-6238/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 22:37:18.725280   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:18.725120   15774 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa...
	I0116 22:37:19.039325   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:19.039203   15774 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/addons-033244.rawdisk...
	I0116 22:37:19.039369   15752 main.go:141] libmachine: (addons-033244) DBG | Writing magic tar header
	I0116 22:37:19.039385   15752 main.go:141] libmachine: (addons-033244) DBG | Writing SSH key tar header
	I0116 22:37:19.039397   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:19.039335   15774 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244 ...
	I0116 22:37:19.039471   15752 main.go:141] libmachine: (addons-033244) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244
	I0116 22:37:19.039499   15752 main.go:141] libmachine: (addons-033244) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244 (perms=drwx------)
	I0116 22:37:19.039515   15752 main.go:141] libmachine: (addons-033244) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube/machines
	I0116 22:37:19.039529   15752 main.go:141] libmachine: (addons-033244) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:37:19.039536   15752 main.go:141] libmachine: (addons-033244) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238
	I0116 22:37:19.039548   15752 main.go:141] libmachine: (addons-033244) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 22:37:19.039562   15752 main.go:141] libmachine: (addons-033244) DBG | Checking permissions on dir: /home/jenkins
	I0116 22:37:19.039578   15752 main.go:141] libmachine: (addons-033244) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube/machines (perms=drwxr-xr-x)
	I0116 22:37:19.039593   15752 main.go:141] libmachine: (addons-033244) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube (perms=drwxr-xr-x)
	I0116 22:37:19.039600   15752 main.go:141] libmachine: (addons-033244) DBG | Checking permissions on dir: /home
	I0116 22:37:19.039608   15752 main.go:141] libmachine: (addons-033244) DBG | Skipping /home - not owner
	I0116 22:37:19.039617   15752 main.go:141] libmachine: (addons-033244) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238 (perms=drwxrwxr-x)
	I0116 22:37:19.039654   15752 main.go:141] libmachine: (addons-033244) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 22:37:19.039675   15752 main.go:141] libmachine: (addons-033244) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 22:37:19.039686   15752 main.go:141] libmachine: (addons-033244) Creating domain...
	I0116 22:37:19.040593   15752 main.go:141] libmachine: (addons-033244) define libvirt domain using xml: 
	I0116 22:37:19.040619   15752 main.go:141] libmachine: (addons-033244) <domain type='kvm'>
	I0116 22:37:19.040629   15752 main.go:141] libmachine: (addons-033244)   <name>addons-033244</name>
	I0116 22:37:19.040639   15752 main.go:141] libmachine: (addons-033244)   <memory unit='MiB'>4000</memory>
	I0116 22:37:19.040653   15752 main.go:141] libmachine: (addons-033244)   <vcpu>2</vcpu>
	I0116 22:37:19.040665   15752 main.go:141] libmachine: (addons-033244)   <features>
	I0116 22:37:19.040676   15752 main.go:141] libmachine: (addons-033244)     <acpi/>
	I0116 22:37:19.040688   15752 main.go:141] libmachine: (addons-033244)     <apic/>
	I0116 22:37:19.040702   15752 main.go:141] libmachine: (addons-033244)     <pae/>
	I0116 22:37:19.040720   15752 main.go:141] libmachine: (addons-033244)     
	I0116 22:37:19.040739   15752 main.go:141] libmachine: (addons-033244)   </features>
	I0116 22:37:19.040755   15752 main.go:141] libmachine: (addons-033244)   <cpu mode='host-passthrough'>
	I0116 22:37:19.040766   15752 main.go:141] libmachine: (addons-033244)   
	I0116 22:37:19.040780   15752 main.go:141] libmachine: (addons-033244)   </cpu>
	I0116 22:37:19.040793   15752 main.go:141] libmachine: (addons-033244)   <os>
	I0116 22:37:19.040824   15752 main.go:141] libmachine: (addons-033244)     <type>hvm</type>
	I0116 22:37:19.040858   15752 main.go:141] libmachine: (addons-033244)     <boot dev='cdrom'/>
	I0116 22:37:19.040875   15752 main.go:141] libmachine: (addons-033244)     <boot dev='hd'/>
	I0116 22:37:19.040890   15752 main.go:141] libmachine: (addons-033244)     <bootmenu enable='no'/>
	I0116 22:37:19.040915   15752 main.go:141] libmachine: (addons-033244)   </os>
	I0116 22:37:19.040933   15752 main.go:141] libmachine: (addons-033244)   <devices>
	I0116 22:37:19.040948   15752 main.go:141] libmachine: (addons-033244)     <disk type='file' device='cdrom'>
	I0116 22:37:19.040965   15752 main.go:141] libmachine: (addons-033244)       <source file='/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/boot2docker.iso'/>
	I0116 22:37:19.040981   15752 main.go:141] libmachine: (addons-033244)       <target dev='hdc' bus='scsi'/>
	I0116 22:37:19.040994   15752 main.go:141] libmachine: (addons-033244)       <readonly/>
	I0116 22:37:19.041011   15752 main.go:141] libmachine: (addons-033244)     </disk>
	I0116 22:37:19.041030   15752 main.go:141] libmachine: (addons-033244)     <disk type='file' device='disk'>
	I0116 22:37:19.041048   15752 main.go:141] libmachine: (addons-033244)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 22:37:19.041065   15752 main.go:141] libmachine: (addons-033244)       <source file='/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/addons-033244.rawdisk'/>
	I0116 22:37:19.041083   15752 main.go:141] libmachine: (addons-033244)       <target dev='hda' bus='virtio'/>
	I0116 22:37:19.041093   15752 main.go:141] libmachine: (addons-033244)     </disk>
	I0116 22:37:19.041105   15752 main.go:141] libmachine: (addons-033244)     <interface type='network'>
	I0116 22:37:19.041114   15752 main.go:141] libmachine: (addons-033244)       <source network='mk-addons-033244'/>
	I0116 22:37:19.041121   15752 main.go:141] libmachine: (addons-033244)       <model type='virtio'/>
	I0116 22:37:19.041127   15752 main.go:141] libmachine: (addons-033244)     </interface>
	I0116 22:37:19.041133   15752 main.go:141] libmachine: (addons-033244)     <interface type='network'>
	I0116 22:37:19.041140   15752 main.go:141] libmachine: (addons-033244)       <source network='default'/>
	I0116 22:37:19.041146   15752 main.go:141] libmachine: (addons-033244)       <model type='virtio'/>
	I0116 22:37:19.041154   15752 main.go:141] libmachine: (addons-033244)     </interface>
	I0116 22:37:19.041160   15752 main.go:141] libmachine: (addons-033244)     <serial type='pty'>
	I0116 22:37:19.041168   15752 main.go:141] libmachine: (addons-033244)       <target port='0'/>
	I0116 22:37:19.041174   15752 main.go:141] libmachine: (addons-033244)     </serial>
	I0116 22:37:19.041181   15752 main.go:141] libmachine: (addons-033244)     <console type='pty'>
	I0116 22:37:19.041196   15752 main.go:141] libmachine: (addons-033244)       <target type='serial' port='0'/>
	I0116 22:37:19.041210   15752 main.go:141] libmachine: (addons-033244)     </console>
	I0116 22:37:19.041226   15752 main.go:141] libmachine: (addons-033244)     <rng model='virtio'>
	I0116 22:37:19.041244   15752 main.go:141] libmachine: (addons-033244)       <backend model='random'>/dev/random</backend>
	I0116 22:37:19.041256   15752 main.go:141] libmachine: (addons-033244)     </rng>
	I0116 22:37:19.041267   15752 main.go:141] libmachine: (addons-033244)     
	I0116 22:37:19.041282   15752 main.go:141] libmachine: (addons-033244)     
	I0116 22:37:19.041294   15752 main.go:141] libmachine: (addons-033244)   </devices>
	I0116 22:37:19.041305   15752 main.go:141] libmachine: (addons-033244) </domain>
	I0116 22:37:19.041320   15752 main.go:141] libmachine: (addons-033244) 
	I0116 22:37:19.046917   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:4f:a3:5b in network default
	I0116 22:37:19.047467   15752 main.go:141] libmachine: (addons-033244) Ensuring networks are active...
	I0116 22:37:19.047488   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:19.048099   15752 main.go:141] libmachine: (addons-033244) Ensuring network default is active
	I0116 22:37:19.048399   15752 main.go:141] libmachine: (addons-033244) Ensuring network mk-addons-033244 is active
	I0116 22:37:19.048948   15752 main.go:141] libmachine: (addons-033244) Getting domain xml...
	I0116 22:37:19.049689   15752 main.go:141] libmachine: (addons-033244) Creating domain...
	I0116 22:37:20.403898   15752 main.go:141] libmachine: (addons-033244) Waiting to get IP...
	I0116 22:37:20.404663   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:20.405067   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:20.405113   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:20.405059   15774 retry.go:31] will retry after 304.019986ms: waiting for machine to come up
	I0116 22:37:20.710657   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:20.711072   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:20.711130   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:20.711057   15774 retry.go:31] will retry after 257.191385ms: waiting for machine to come up
	I0116 22:37:20.970151   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:20.970683   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:20.970715   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:20.970626   15774 retry.go:31] will retry after 467.453605ms: waiting for machine to come up
	I0116 22:37:21.439296   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:21.439740   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:21.439781   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:21.439694   15774 retry.go:31] will retry after 418.575401ms: waiting for machine to come up
	I0116 22:37:21.860346   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:21.860823   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:21.860855   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:21.860768   15774 retry.go:31] will retry after 584.247659ms: waiting for machine to come up
	I0116 22:37:22.446472   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:22.446971   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:22.447002   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:22.446920   15774 retry.go:31] will retry after 811.616016ms: waiting for machine to come up
	I0116 22:37:23.260322   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:23.260732   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:23.260754   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:23.260682   15774 retry.go:31] will retry after 822.600491ms: waiting for machine to come up
	I0116 22:37:24.084601   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:24.085144   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:24.085196   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:24.085082   15774 retry.go:31] will retry after 1.349601347s: waiting for machine to come up
	I0116 22:37:25.435844   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:25.436267   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:25.436300   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:25.436218   15774 retry.go:31] will retry after 1.271820693s: waiting for machine to come up
	I0116 22:37:26.709576   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:26.709987   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:26.710017   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:26.709943   15774 retry.go:31] will retry after 1.837185271s: waiting for machine to come up
	I0116 22:37:28.549035   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:28.549492   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:28.549514   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:28.549443   15774 retry.go:31] will retry after 1.775238293s: waiting for machine to come up
	I0116 22:37:30.326169   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:30.326638   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:30.326670   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:30.326589   15774 retry.go:31] will retry after 2.230254003s: waiting for machine to come up
	I0116 22:37:32.559827   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:32.560304   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:32.560327   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:32.560257   15774 retry.go:31] will retry after 3.280230549s: waiting for machine to come up
	I0116 22:37:35.844169   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:35.844557   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find current IP address of domain addons-033244 in network mk-addons-033244
	I0116 22:37:35.844582   15752 main.go:141] libmachine: (addons-033244) DBG | I0116 22:37:35.844498   15774 retry.go:31] will retry after 5.027393562s: waiting for machine to come up
	I0116 22:37:40.875617   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:40.876082   15752 main.go:141] libmachine: (addons-033244) Found IP for machine: 192.168.39.234
	I0116 22:37:40.876101   15752 main.go:141] libmachine: (addons-033244) Reserving static IP address...
	I0116 22:37:40.876115   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has current primary IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:40.876485   15752 main.go:141] libmachine: (addons-033244) DBG | unable to find host DHCP lease matching {name: "addons-033244", mac: "52:54:00:e3:6a:13", ip: "192.168.39.234"} in network mk-addons-033244
	I0116 22:37:40.945447   15752 main.go:141] libmachine: (addons-033244) DBG | Getting to WaitForSSH function...
	I0116 22:37:40.945476   15752 main.go:141] libmachine: (addons-033244) Reserved static IP address: 192.168.39.234
	I0116 22:37:40.945490   15752 main.go:141] libmachine: (addons-033244) Waiting for SSH to be available...
	I0116 22:37:40.947823   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:40.948076   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:40.948105   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:40.948235   15752 main.go:141] libmachine: (addons-033244) DBG | Using SSH client type: external
	I0116 22:37:40.948265   15752 main.go:141] libmachine: (addons-033244) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa (-rw-------)
	I0116 22:37:40.948297   15752 main.go:141] libmachine: (addons-033244) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 22:37:40.948315   15752 main.go:141] libmachine: (addons-033244) DBG | About to run SSH command:
	I0116 22:37:40.948329   15752 main.go:141] libmachine: (addons-033244) DBG | exit 0
	I0116 22:37:41.082069   15752 main.go:141] libmachine: (addons-033244) DBG | SSH cmd err, output: <nil>: 
	I0116 22:37:41.082322   15752 main.go:141] libmachine: (addons-033244) KVM machine creation complete!
	I0116 22:37:41.082646   15752 main.go:141] libmachine: (addons-033244) Calling .GetConfigRaw
	I0116 22:37:41.083126   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:41.083308   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:41.083434   15752 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 22:37:41.083447   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:37:41.084828   15752 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 22:37:41.084848   15752 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 22:37:41.084856   15752 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 22:37:41.084865   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:41.086905   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.087292   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.087324   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.087446   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:41.087647   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.087832   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.087961   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:41.088114   15752 main.go:141] libmachine: Using SSH client type: native
	I0116 22:37:41.088448   15752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 22:37:41.088462   15752 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 22:37:41.197759   15752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 22:37:41.197782   15752 main.go:141] libmachine: Detecting the provisioner...
	I0116 22:37:41.197791   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:41.200765   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.201171   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.201201   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.201389   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:41.201588   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.201796   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.201983   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:41.202184   15752 main.go:141] libmachine: Using SSH client type: native
	I0116 22:37:41.202621   15752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 22:37:41.202637   15752 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 22:37:41.314689   15752 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 22:37:41.314745   15752 main.go:141] libmachine: found compatible host: buildroot
	I0116 22:37:41.314752   15752 main.go:141] libmachine: Provisioning with buildroot...
	I0116 22:37:41.314760   15752 main.go:141] libmachine: (addons-033244) Calling .GetMachineName
	I0116 22:37:41.315052   15752 buildroot.go:166] provisioning hostname "addons-033244"
	I0116 22:37:41.315076   15752 main.go:141] libmachine: (addons-033244) Calling .GetMachineName
	I0116 22:37:41.315251   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:41.317987   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.318386   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.318417   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.318588   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:41.318761   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.318970   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.319172   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:41.319369   15752 main.go:141] libmachine: Using SSH client type: native
	I0116 22:37:41.319732   15752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 22:37:41.319746   15752 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-033244 && echo "addons-033244" | sudo tee /etc/hostname
	I0116 22:37:41.442739   15752 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-033244
	
	I0116 22:37:41.442769   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:41.445128   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.445571   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.445615   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.445795   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:41.445984   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.446178   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.446329   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:41.446531   15752 main.go:141] libmachine: Using SSH client type: native
	I0116 22:37:41.446834   15752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 22:37:41.446852   15752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-033244' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-033244/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-033244' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 22:37:41.562915   15752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 22:37:41.562945   15752 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 22:37:41.562977   15752 buildroot.go:174] setting up certificates
	I0116 22:37:41.562990   15752 provision.go:83] configureAuth start
	I0116 22:37:41.562999   15752 main.go:141] libmachine: (addons-033244) Calling .GetMachineName
	I0116 22:37:41.563271   15752 main.go:141] libmachine: (addons-033244) Calling .GetIP
	I0116 22:37:41.565848   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.566270   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.566303   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.566439   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:41.568484   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.568797   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.568825   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.568976   15752 provision.go:138] copyHostCerts
	I0116 22:37:41.569040   15752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 22:37:41.569154   15752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 22:37:41.569207   15752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 22:37:41.569248   15752 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.addons-033244 san=[192.168.39.234 192.168.39.234 localhost 127.0.0.1 minikube addons-033244]
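
The server certificate itself is generated in-process, so only the requested SANs appear in the log. Purely as a sketch, an equivalent certificate could be issued from the same CA material with openssl (file names here are assumptions, not paths from this run):

    # Illustrative only: issue a server cert signed by the minikube CA with the SAN list logged above
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.addons-033244" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:192.168.39.234,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-033244')
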
	I0116 22:37:41.722217   15752 provision.go:172] copyRemoteCerts
	I0116 22:37:41.722270   15752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 22:37:41.722292   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:41.724813   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.725118   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.725143   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.725318   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:41.725497   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.725652   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:41.725802   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:37:41.806916   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 22:37:41.828353   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 22:37:41.849501   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 22:37:41.870075   15752 provision.go:86] duration metric: configureAuth took 307.072696ms
	I0116 22:37:41.870101   15752 buildroot.go:189] setting minikube options for container-runtime
	I0116 22:37:41.870286   15752 config.go:182] Loaded profile config "addons-033244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 22:37:41.870395   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:41.873009   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.873301   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:41.873337   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:41.873523   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:41.873742   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.873894   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:41.874064   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:41.874241   15752 main.go:141] libmachine: Using SSH client type: native
	I0116 22:37:41.874605   15752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 22:37:41.874627   15752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 22:37:42.155493   15752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
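
The command above writes the insecure-registry flag for the in-cluster service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O. A quick way to confirm the override landed, assuming the crio unit on this ISO sources that file, would be:

    # Sketch: confirm the drop-in options and that CRI-O restarted cleanly
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i EnvironmentFile
    systemctl is-active crio
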
	
	I0116 22:37:42.155525   15752 main.go:141] libmachine: Checking connection to Docker...
	I0116 22:37:42.155542   15752 main.go:141] libmachine: (addons-033244) Calling .GetURL
	I0116 22:37:42.156890   15752 main.go:141] libmachine: (addons-033244) DBG | Using libvirt version 6000000
	I0116 22:37:42.159363   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.159772   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:42.159811   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.160010   15752 main.go:141] libmachine: Docker is up and running!
	I0116 22:37:42.160031   15752 main.go:141] libmachine: Reticulating splines...
	I0116 22:37:42.160039   15752 client.go:171] LocalClient.Create took 23.966662431s
	I0116 22:37:42.160064   15752 start.go:167] duration metric: libmachine.API.Create for "addons-033244" took 23.966718458s
	I0116 22:37:42.160083   15752 start.go:300] post-start starting for "addons-033244" (driver="kvm2")
	I0116 22:37:42.160100   15752 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 22:37:42.160121   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:42.160348   15752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 22:37:42.160373   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:42.162616   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.162950   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:42.162982   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.163100   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:42.163243   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:42.163392   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:42.163500   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:37:42.247418   15752 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 22:37:42.251534   15752 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 22:37:42.251584   15752 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 22:37:42.251656   15752 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 22:37:42.251690   15752 start.go:303] post-start completed in 91.596455ms
	I0116 22:37:42.251728   15752 main.go:141] libmachine: (addons-033244) Calling .GetConfigRaw
	I0116 22:37:42.252208   15752 main.go:141] libmachine: (addons-033244) Calling .GetIP
	I0116 22:37:42.254962   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.255313   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:42.255327   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.255515   15752 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/config.json ...
	I0116 22:37:42.255692   15752 start.go:128] duration metric: createHost completed in 24.079270592s
	I0116 22:37:42.255710   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:42.257775   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.258034   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:42.258065   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.258206   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:42.258386   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:42.258540   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:42.258687   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:42.258848   15752 main.go:141] libmachine: Using SSH client type: native
	I0116 22:37:42.259171   15752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 22:37:42.259187   15752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 22:37:42.370696   15752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705444662.342835240
	
	I0116 22:37:42.370721   15752 fix.go:206] guest clock: 1705444662.342835240
	I0116 22:37:42.370731   15752 fix.go:219] Guest: 2024-01-16 22:37:42.34283524 +0000 UTC Remote: 2024-01-16 22:37:42.255702065 +0000 UTC m=+24.188906986 (delta=87.133175ms)
	I0116 22:37:42.370775   15752 fix.go:190] guest clock delta is within tolerance: 87.133175ms
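
The guest-clock check compares the VM's "date +%s.%N" output with the host's wall clock and accepts the ~87 ms skew. The same comparison done by hand (minimal sketch; key path as above) looks like:

    # Compare guest vs. host clock; a sub-second delta is within tolerance here
    KEY=/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.234 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc) s"
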
	I0116 22:37:42.370780   15752 start.go:83] releasing machines lock for "addons-033244", held for 24.194426541s
	I0116 22:37:42.370798   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:42.371072   15752 main.go:141] libmachine: (addons-033244) Calling .GetIP
	I0116 22:37:42.373629   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.373927   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:42.373952   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.374085   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:42.374696   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:42.374902   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:37:42.374995   15752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 22:37:42.375039   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:42.375150   15752 ssh_runner.go:195] Run: cat /version.json
	I0116 22:37:42.375172   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:37:42.377787   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.377811   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.378198   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:42.378223   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.378254   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:42.378266   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:42.378357   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:42.378466   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:37:42.378577   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:42.378671   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:37:42.378741   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:42.378810   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:37:42.378875   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:37:42.378900   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:37:42.495759   15752 ssh_runner.go:195] Run: systemctl --version
	I0116 22:37:42.501113   15752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 22:37:42.655146   15752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 22:37:42.660716   15752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 22:37:42.660783   15752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 22:37:42.673701   15752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 22:37:42.673731   15752 start.go:475] detecting cgroup driver to use...
	I0116 22:37:42.673806   15752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 22:37:42.690032   15752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 22:37:42.701912   15752 docker.go:217] disabling cri-docker service (if available) ...
	I0116 22:37:42.701968   15752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 22:37:42.714051   15752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 22:37:42.726320   15752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 22:37:42.825486   15752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 22:37:42.939252   15752 docker.go:233] disabling docker service ...
	I0116 22:37:42.939338   15752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 22:37:42.951782   15752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 22:37:42.963014   15752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 22:37:43.060898   15752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 22:37:43.171476   15752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 22:37:43.183965   15752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 22:37:43.200111   15752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 22:37:43.200169   15752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:37:43.209680   15752 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 22:37:43.209737   15752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:37:43.219343   15752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:37:43.229168   15752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:37:43.238275   15752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 22:37:43.247679   15752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 22:37:43.255050   15752 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 22:37:43.255112   15752 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 22:37:43.266881   15752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 22:37:43.275776   15752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 22:37:43.378353   15752 ssh_runner.go:195] Run: sudo systemctl restart crio
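
The sed edits above pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs cgroup manager with a per-pod conmon cgroup before the restart. A sketch of how to confirm the three values in the drop-in (expected output is inferred from the sed commands, not captured in this run):

    # Check the edited CRI-O drop-in after the restart
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    sudo systemctl is-active crio
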
	I0116 22:37:43.533783   15752 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 22:37:43.533865   15752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 22:37:43.539392   15752 start.go:543] Will wait 60s for crictl version
	I0116 22:37:43.539457   15752 ssh_runner.go:195] Run: which crictl
	I0116 22:37:43.542533   15752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 22:37:43.576791   15752 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 22:37:43.576892   15752 ssh_runner.go:195] Run: crio --version
	I0116 22:37:43.617570   15752 ssh_runner.go:195] Run: crio --version
	I0116 22:37:43.668242   15752 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 22:37:43.670044   15752 main.go:141] libmachine: (addons-033244) Calling .GetIP
	I0116 22:37:43.672646   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:43.673054   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:37:43.673081   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:37:43.673444   15752 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 22:37:43.677215   15752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 22:37:43.687667   15752 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 22:37:43.687717   15752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 22:37:43.719370   15752 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 22:37:43.719449   15752 ssh_runner.go:195] Run: which lz4
	I0116 22:37:43.722875   15752 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 22:37:43.726361   15752 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 22:37:43.726394   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 22:37:45.275503   15752 crio.go:444] Took 1.552669 seconds to copy over tarball
	I0116 22:37:45.275572   15752 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 22:37:48.315857   15752 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040253387s)
	I0116 22:37:48.315885   15752 crio.go:451] Took 3.040356 seconds to extract the tarball
	I0116 22:37:48.315894   15752 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 22:37:48.355596   15752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 22:37:48.426763   15752 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 22:37:48.426786   15752 cache_images.go:84] Images are preloaded, skipping loading
	I0116 22:37:48.426842   15752 ssh_runner.go:195] Run: crio config
	I0116 22:37:48.482218   15752 cni.go:84] Creating CNI manager for ""
	I0116 22:37:48.482240   15752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:37:48.482259   15752 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 22:37:48.482275   15752 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-033244 NodeName:addons-033244 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 22:37:48.482435   15752 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-033244"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
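
The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later copied into place. Outside of this run, such a config can be sanity-checked without touching node state, for example:

    # Illustrative only: dry-run kubeadm against the generated config
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
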
	
	I0116 22:37:48.482539   15752 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-033244 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-033244 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 22:37:48.482595   15752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 22:37:48.491594   15752 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 22:37:48.491663   15752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 22:37:48.499759   15752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0116 22:37:48.514760   15752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 22:37:48.529984   15752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0116 22:37:48.545033   15752 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0116 22:37:48.548491   15752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 22:37:48.559225   15752 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244 for IP: 192.168.39.234
	I0116 22:37:48.559252   15752 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:48.559385   15752 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 22:37:48.690547   15752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt ...
	I0116 22:37:48.690575   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt: {Name:mkd018176c883f6b520a46d1b493871665dcfa90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:48.690723   15752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key ...
	I0116 22:37:48.690734   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key: {Name:mke0e28e4ede6a7dbf3401931bc476667ba82812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:48.690797   15752 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 22:37:48.826007   15752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt ...
	I0116 22:37:48.826033   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt: {Name:mk9aece32917dd56a564d2b6996e10de6471572f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:48.826167   15752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key ...
	I0116 22:37:48.826177   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key: {Name:mk1215d20598c1f8115742918555754866ae19d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:48.826269   15752 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.key
	I0116 22:37:48.826282   15752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt with IP's: []
	I0116 22:37:48.948104   15752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt ...
	I0116 22:37:48.948135   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: {Name:mk607275a56d84bd433ee6650927f4a7b9e146ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:48.948279   15752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.key ...
	I0116 22:37:48.948289   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.key: {Name:mkfc5491116522ac7bcb77bff2caf7c2b70ca364 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:48.948355   15752 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.key.51b88da4
	I0116 22:37:48.948371   15752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.crt.51b88da4 with IP's: [192.168.39.234 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 22:37:49.114926   15752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.crt.51b88da4 ...
	I0116 22:37:49.114952   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.crt.51b88da4: {Name:mk9189f6ca3477db376d31e41626d833b7f1ecbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:49.115529   15752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.key.51b88da4 ...
	I0116 22:37:49.115544   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.key.51b88da4: {Name:mk369607a2c0a160cd306414c184fe66a692d217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:49.115616   15752 certs.go:337] copying /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.crt.51b88da4 -> /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.crt
	I0116 22:37:49.115686   15752 certs.go:341] copying /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.key.51b88da4 -> /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.key
	I0116 22:37:49.115734   15752 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.key
	I0116 22:37:49.115751   15752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.crt with IP's: []
	I0116 22:37:49.269140   15752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.crt ...
	I0116 22:37:49.269180   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.crt: {Name:mkc1c206021f86930b6f59b210e17f8a5701af55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:49.269369   15752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.key ...
	I0116 22:37:49.269387   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.key: {Name:mke2a50420d60da69989c672d6d49e232a1fc097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:37:49.269601   15752 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 22:37:49.269646   15752 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 22:37:49.269689   15752 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 22:37:49.269720   15752 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 22:37:49.270279   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 22:37:49.292905   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 22:37:49.314475   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 22:37:49.335943   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 22:37:49.357046   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 22:37:49.377570   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 22:37:49.397850   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 22:37:49.418861   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 22:37:49.440396   15752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 22:37:49.460835   15752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 22:37:49.475788   15752 ssh_runner.go:195] Run: openssl version
	I0116 22:37:49.480983   15752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 22:37:49.490026   15752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 22:37:49.494909   15752 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 22:37:49.494970   15752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 22:37:49.499850   15752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
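
The b5213941.0 link created above is named after the OpenSSL subject hash of the minikube CA, which is how the system trust store locates it. Reproducing the link name by hand:

    # Derive the trust-store symlink name from the CA's subject hash
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "${hash}.0"    # b5213941.0 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
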
	I0116 22:37:49.509242   15752 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 22:37:49.513163   15752 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 22:37:49.513209   15752 kubeadm.go:404] StartCluster: {Name:addons-033244 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-033244 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:37:49.513277   15752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 22:37:49.513320   15752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 22:37:49.549029   15752 cri.go:89] found id: ""
	I0116 22:37:49.549111   15752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 22:37:49.557612   15752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 22:37:49.565651   15752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 22:37:49.573829   15752 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 22:37:49.573871   15752 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 22:37:49.748329   15752 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 22:38:01.808050   15752 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 22:38:01.808121   15752 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 22:38:01.808230   15752 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 22:38:01.808345   15752 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 22:38:01.808459   15752 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 22:38:01.808576   15752 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 22:38:01.810493   15752 out.go:204]   - Generating certificates and keys ...
	I0116 22:38:01.810611   15752 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 22:38:01.810709   15752 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 22:38:01.810812   15752 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 22:38:01.810902   15752 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 22:38:01.810989   15752 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 22:38:01.811056   15752 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 22:38:01.811124   15752 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 22:38:01.811281   15752 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-033244 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0116 22:38:01.811361   15752 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 22:38:01.811499   15752 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-033244 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0116 22:38:01.811582   15752 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 22:38:01.811669   15752 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 22:38:01.811736   15752 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 22:38:01.811811   15752 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 22:38:01.811883   15752 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 22:38:01.811967   15752 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 22:38:01.812055   15752 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 22:38:01.812132   15752 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 22:38:01.812246   15752 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 22:38:01.812347   15752 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 22:38:01.814163   15752 out.go:204]   - Booting up control plane ...
	I0116 22:38:01.814275   15752 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 22:38:01.814392   15752 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 22:38:01.814473   15752 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 22:38:01.814631   15752 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 22:38:01.814747   15752 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 22:38:01.814802   15752 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 22:38:01.814983   15752 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 22:38:01.815091   15752 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002917 seconds
	I0116 22:38:01.815225   15752 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 22:38:01.815403   15752 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 22:38:01.815464   15752 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 22:38:01.815611   15752 kubeadm.go:322] [mark-control-plane] Marking the node addons-033244 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 22:38:01.815665   15752 kubeadm.go:322] [bootstrap-token] Using token: lvrle7.a5ara2xyqn32bwzd
	I0116 22:38:01.817170   15752 out.go:204]   - Configuring RBAC rules ...
	I0116 22:38:01.817248   15752 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 22:38:01.817328   15752 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 22:38:01.817438   15752 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 22:38:01.817539   15752 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 22:38:01.817663   15752 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 22:38:01.817741   15752 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 22:38:01.817865   15752 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 22:38:01.817918   15752 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 22:38:01.817960   15752 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 22:38:01.817973   15752 kubeadm.go:322] 
	I0116 22:38:01.818040   15752 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 22:38:01.818049   15752 kubeadm.go:322] 
	I0116 22:38:01.818132   15752 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 22:38:01.818141   15752 kubeadm.go:322] 
	I0116 22:38:01.818174   15752 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 22:38:01.818252   15752 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 22:38:01.818326   15752 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 22:38:01.818348   15752 kubeadm.go:322] 
	I0116 22:38:01.818421   15752 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 22:38:01.818434   15752 kubeadm.go:322] 
	I0116 22:38:01.818508   15752 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 22:38:01.818524   15752 kubeadm.go:322] 
	I0116 22:38:01.818596   15752 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 22:38:01.818671   15752 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 22:38:01.818758   15752 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 22:38:01.818769   15752 kubeadm.go:322] 
	I0116 22:38:01.818865   15752 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 22:38:01.818975   15752 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 22:38:01.818987   15752 kubeadm.go:322] 
	I0116 22:38:01.819081   15752 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lvrle7.a5ara2xyqn32bwzd \
	I0116 22:38:01.819228   15752 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 22:38:01.819268   15752 kubeadm.go:322] 	--control-plane 
	I0116 22:38:01.819279   15752 kubeadm.go:322] 
	I0116 22:38:01.819359   15752 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 22:38:01.819366   15752 kubeadm.go:322] 
	I0116 22:38:01.819433   15752 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lvrle7.a5ara2xyqn32bwzd \
	I0116 22:38:01.819560   15752 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 22:38:01.819584   15752 cni.go:84] Creating CNI manager for ""
	I0116 22:38:01.819592   15752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:38:01.822503   15752 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 22:38:01.824315   15752 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 22:38:01.849636   15752 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 22:38:01.897388   15752 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 22:38:01.897493   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:01.897534   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=addons-033244 minikube.k8s.io/updated_at=2024_01_16T22_38_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:01.951498   15752 ops.go:34] apiserver oom_adj: -16
	I0116 22:38:02.070451   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:02.570991   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:03.070492   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:03.571274   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:04.070704   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:04.571439   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:05.071472   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:05.571128   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:06.070466   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:06.570793   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:07.070732   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:07.570509   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:08.070611   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:08.571029   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:09.070610   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:09.570468   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:10.071479   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:10.570468   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:11.071016   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:11.571094   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:12.071010   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:12.570467   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:13.070563   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:13.571415   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:14.070912   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:14.571447   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:15.071380   15752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:38:15.188461   15752 kubeadm.go:1088] duration metric: took 13.291026133s to wait for elevateKubeSystemPrivileges.
	I0116 22:38:15.188485   15752 kubeadm.go:406] StartCluster complete in 25.675280459s
	I0116 22:38:15.188501   15752 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:38:15.188631   15752 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:38:15.188988   15752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:38:15.189216   15752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 22:38:15.189285   15752 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 22:38:15.189402   15752 addons.go:69] Setting yakd=true in profile "addons-033244"
	I0116 22:38:15.189416   15752 addons.go:69] Setting ingress-dns=true in profile "addons-033244"
	I0116 22:38:15.189437   15752 addons.go:69] Setting inspektor-gadget=true in profile "addons-033244"
	I0116 22:38:15.189448   15752 addons.go:69] Setting storage-provisioner=true in profile "addons-033244"
	I0116 22:38:15.189452   15752 addons.go:69] Setting gcp-auth=true in profile "addons-033244"
	I0116 22:38:15.189463   15752 addons.go:234] Setting addon inspektor-gadget=true in "addons-033244"
	I0116 22:38:15.189474   15752 addons.go:234] Setting addon storage-provisioner=true in "addons-033244"
	I0116 22:38:15.189463   15752 addons.go:69] Setting default-storageclass=true in profile "addons-033244"
	I0116 22:38:15.189480   15752 addons.go:69] Setting ingress=true in profile "addons-033244"
	I0116 22:38:15.189486   15752 config.go:182] Loaded profile config "addons-033244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 22:38:15.189493   15752 addons.go:234] Setting addon ingress=true in "addons-033244"
	I0116 22:38:15.189502   15752 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-033244"
	I0116 22:38:15.189520   15752 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-033244"
	I0116 22:38:15.189528   15752 addons.go:69] Setting volumesnapshots=true in profile "addons-033244"
	I0116 22:38:15.189532   15752 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-033244"
	I0116 22:38:15.189530   15752 addons.go:69] Setting metrics-server=true in profile "addons-033244"
	I0116 22:38:15.189560   15752 addons.go:234] Setting addon metrics-server=true in "addons-033244"
	I0116 22:38:15.189560   15752 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-033244"
	I0116 22:38:15.189439   15752 addons.go:234] Setting addon yakd=true in "addons-033244"
	I0116 22:38:15.189588   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.189603   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.189515   15752 addons.go:69] Setting cloud-spanner=true in profile "addons-033244"
	I0116 22:38:15.189469   15752 mustload.go:65] Loading cluster: addons-033244
	I0116 22:38:15.189618   15752 addons.go:234] Setting addon cloud-spanner=true in "addons-033244"
	I0116 22:38:15.189646   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.189507   15752 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-033244"
	I0116 22:38:15.189545   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.189519   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.189790   15752 config.go:182] Loaded profile config "addons-033244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 22:38:15.190035   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190057   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190065   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190039   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190083   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.189604   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.190104   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.190110   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.189424   15752 addons.go:69] Setting registry=true in profile "addons-033244"
	I0116 22:38:15.190129   15752 addons.go:234] Setting addon registry=true in "addons-033244"
	I0116 22:38:15.189549   15752 addons.go:234] Setting addon volumesnapshots=true in "addons-033244"
	I0116 22:38:15.189519   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.190180   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190211   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.190264   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190300   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.189475   15752 addons.go:69] Setting helm-tiller=true in profile "addons-033244"
	I0116 22:38:15.190368   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.190395   15752 addons.go:234] Setting addon helm-tiller=true in "addons-033244"
	I0116 22:38:15.189446   15752 addons.go:234] Setting addon ingress-dns=true in "addons-033244"
	I0116 22:38:15.190463   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.190489   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190512   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.190562   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.190636   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.190039   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.190771   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.189532   15752 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-033244"
	I0116 22:38:15.189562   15752 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-033244"
	I0116 22:38:15.190889   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.191024   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.191049   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.191125   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.191241   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.191267   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.191324   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.191343   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.191389   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.191407   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.191417   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.191434   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.191452   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.191466   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.191471   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.191489   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.210503   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I0116 22:38:15.211426   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.211467   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0116 22:38:15.215332   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.215352   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.215359   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.215895   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.216005   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.216064   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.216100   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.216610   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.217211   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.217442   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.217752   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0116 22:38:15.217888   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I0116 22:38:15.218137   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.218265   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.218576   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.219147   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.219169   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.218724   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.219363   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.219699   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.219835   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.219848   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.220372   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.220400   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.221431   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0116 22:38:15.221862   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.222313   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.222367   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.222957   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.223456   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.223501   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.223810   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.224313   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.224375   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.231967   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0116 22:38:15.232154   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0116 22:38:15.232611   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.232735   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0116 22:38:15.232884   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.233047   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.233060   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.233446   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.233610   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.233623   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.233682   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.234138   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.234215   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.235568   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.235642   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.235570   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.235739   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.236820   15752 addons.go:234] Setting addon default-storageclass=true in "addons-033244"
	I0116 22:38:15.236862   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.237262   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.237282   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.237739   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0116 22:38:15.237831   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.238187   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.238640   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.238673   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.239307   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.239325   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.239668   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.240116   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.240144   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.240450   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42269
	I0116 22:38:15.240951   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.241380   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.241395   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.241668   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.241793   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.260899   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0116 22:38:15.261887   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.262435   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.262462   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.262883   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.264039   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.264069   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.264258   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0116 22:38:15.264725   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.265220   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.265233   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.265651   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0116 22:38:15.265802   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.265990   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.266082   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.266494   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.266508   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.266896   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.267456   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.267502   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.267630   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.269610   15752 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0116 22:38:15.268468   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I0116 22:38:15.269403   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
	I0116 22:38:15.271296   15752 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 22:38:15.271308   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 22:38:15.271328   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.272470   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.272837   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.273150   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.273167   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.273313   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.273335   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.273589   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.273653   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.273772   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.273803   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.274496   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.275085   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.275115   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.275235   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.275397   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.275495   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.275588   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.277615   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.277687   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0116 22:38:15.279922   15752 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 22:38:15.278225   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.278925   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I0116 22:38:15.279160   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0116 22:38:15.279588   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.281490   15752 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 22:38:15.281510   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 22:38:15.281529   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.283835   15752 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 22:38:15.285226   15752 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 22:38:15.285246   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 22:38:15.285263   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.284366   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0116 22:38:15.282315   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0116 22:38:15.282634   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.282690   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.284394   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0116 22:38:15.282195   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.285592   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.284611   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38877
	I0116 22:38:15.286074   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.286374   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.286433   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.286467   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.286676   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.286737   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.286753   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.286782   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0116 22:38:15.287019   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.287033   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.287147   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.287158   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.287227   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.287787   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.287847   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.287890   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.288015   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.288037   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.288109   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.288165   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.288244   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.288317   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.288523   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.288671   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.288684   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.288748   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.288864   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.288875   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.288934   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.289188   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.289239   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.289853   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.289866   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.289924   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.289967   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.290232   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.292064   15752 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 22:38:15.290456   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.290842   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.290952   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.290975   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.291074   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.291657   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.291728   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0116 22:38:15.292195   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.292448   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.293991   15752 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-033244"
	I0116 22:38:15.294019   15752 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 22:38:15.294028   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:15.294047   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.294072   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.294093   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.294446   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.294469   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.294486   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.294029   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 22:38:15.294517   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.294534   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.295105   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.296496   15752 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 22:38:15.295161   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36085
	I0116 22:38:15.295236   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.295516   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.295832   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.296330   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.297424   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.298062   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.298381   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.298610   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.299565   15752 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 22:38:15.299600   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.299622   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.299644   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.299790   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.300041   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.300815   15752 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 22:38:15.300824   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.300887   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.301424   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.301516   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.301563   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.302199   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
	I0116 22:38:15.302498   15752 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 22:38:15.302825   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.303688   15752 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 22:38:15.303701   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 22:38:15.303718   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.303723   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.303755   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.304216   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.306322   15752 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 22:38:15.306347   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 22:38:15.306366   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.305182   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.305820   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.306685   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.307145   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.307163   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.307223   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.307761   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.309319   15752 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 22:38:15.307932   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	W0116 22:38:15.309726   15752 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-033244" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0116 22:38:15.310061   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.310624   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.311940   15752 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 22:38:15.310949   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	E0116 22:38:15.310944   15752 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0116 22:38:15.311483   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.312373   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.312544   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.312628   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.313168   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.313273   15752 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 22:38:15.313283   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 22:38:15.313297   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.313342   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.313368   15752 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 22:38:15.315102   15752 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0116 22:38:15.312890   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.313887   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.316203   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.316861   15752 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0116 22:38:15.316873   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0116 22:38:15.316890   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.316947   15752 out.go:177] * Verifying Kubernetes components...
	I0116 22:38:15.318299   15752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 22:38:15.316998   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I0116 22:38:15.317094   15752 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 22:38:15.317350   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.317387   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.317535   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.317759   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.318716   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.319729   15752 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 22:38:15.319745   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 22:38:15.319762   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.319762   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.320030   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.320042   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.320076   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.320085   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.320295   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.320315   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.320383   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.320404   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.320426   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.320596   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.320602   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.320768   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.320775   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.320811   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.320905   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.320960   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.321062   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.321576   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I0116 22:38:15.321995   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.322974   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.323191   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.324797   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 22:38:15.323525   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.323680   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.323859   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.326186   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.326189   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.327884   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 22:38:15.326288   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I0116 22:38:15.326400   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.326630   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.330386   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 22:38:15.329446   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.329690   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.329720   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:15.333139   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 22:38:15.333215   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43401
	I0116 22:38:15.332038   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.332130   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.331626   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:15.335019   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 22:38:15.336595   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 22:38:15.335551   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.335584   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.336847   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0116 22:38:15.339024   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 22:38:15.338106   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.338125   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.338146   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.340248   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 22:38:15.341465   15752 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 22:38:15.341479   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 22:38:15.341491   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.340315   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.340486   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.340590   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.341560   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.341893   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.341898   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.342087   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.342696   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.344313   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.346269   15752 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 22:38:15.345060   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.345086   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.345654   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.346245   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.347710   15752 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 22:38:15.347726   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 22:38:15.347745   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.347834   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.347858   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.349595   15752 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 22:38:15.348269   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.348336   15752 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 22:38:15.351790   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.351808   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.352822   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.352843   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.352892   15752 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 22:38:15.352908   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 22:38:15.352925   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.352990   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 22:38:15.353004   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.353164   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.353215   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.353316   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.353703   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.353828   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.356189   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0116 22:38:15.356543   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:15.356628   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.356674   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.357045   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:15.357065   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:15.357090   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.357110   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.357127   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.357144   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.357178   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.357335   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.357389   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:15.357429   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.357565   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.357594   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.357662   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.357720   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.357808   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.358297   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:15.359745   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:15.361791   15752 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 22:38:15.363374   15752 out.go:177]   - Using image docker.io/busybox:stable
	I0116 22:38:15.364857   15752 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 22:38:15.364871   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 22:38:15.364888   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:15.367509   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.367848   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:15.367905   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:15.368014   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:15.368159   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:15.368300   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:15.368377   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:15.526400   15752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 22:38:15.530297   15752 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 22:38:15.530320   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 22:38:15.543400   15752 node_ready.go:35] waiting up to 6m0s for node "addons-033244" to be "Ready" ...
	I0116 22:38:15.563629   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 22:38:15.573592   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 22:38:15.604101   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 22:38:15.612440   15752 node_ready.go:49] node "addons-033244" has status "Ready":"True"
	I0116 22:38:15.612465   15752 node_ready.go:38] duration metric: took 69.017723ms waiting for node "addons-033244" to be "Ready" ...
	I0116 22:38:15.612474   15752 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 22:38:15.652692   15752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:15.688234   15752 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 22:38:15.688264   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 22:38:15.712990   15752 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0116 22:38:15.713016   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0116 22:38:15.758492   15752 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 22:38:15.758519   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 22:38:15.762734   15752 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 22:38:15.762752   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 22:38:15.766873   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 22:38:15.770506   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 22:38:15.774188   15752 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 22:38:15.774212   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 22:38:15.777065   15752 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 22:38:15.777094   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 22:38:15.784234   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 22:38:15.786671   15752 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 22:38:15.786690   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 22:38:15.926696   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 22:38:16.099259   15752 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 22:38:16.099283   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 22:38:16.105097   15752 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 22:38:16.105114   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 22:38:16.109545   15752 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 22:38:16.109566   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0116 22:38:16.164209   15752 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 22:38:16.164229   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 22:38:16.187397   15752 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 22:38:16.187418   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 22:38:16.195439   15752 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 22:38:16.195462   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 22:38:16.205948   15752 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 22:38:16.205971   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 22:38:16.438925   15752 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 22:38:16.438953   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 22:38:16.450593   15752 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 22:38:16.450617   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 22:38:16.452447   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 22:38:16.471980   15752 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 22:38:16.472001   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 22:38:16.472392   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 22:38:16.477124   15752 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 22:38:16.477154   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 22:38:16.479361   15752 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 22:38:16.479388   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 22:38:16.549143   15752 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 22:38:16.549164   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 22:38:16.563998   15752 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 22:38:16.564017   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 22:38:16.633474   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 22:38:16.647249   15752 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 22:38:16.647271   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 22:38:16.647690   15752 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 22:38:16.647706   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 22:38:16.656791   15752 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 22:38:16.656809   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 22:38:16.666200   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 22:38:16.711522   15752 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 22:38:16.711562   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 22:38:16.723857   15752 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 22:38:16.723879   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 22:38:16.739069   15752 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 22:38:16.739104   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 22:38:16.808361   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 22:38:16.812607   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 22:38:16.824657   15752 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 22:38:16.824687   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 22:38:16.941509   15752 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 22:38:16.941536   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 22:38:17.022006   15752 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 22:38:17.022028   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 22:38:17.070707   15752 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 22:38:17.070730   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 22:38:17.100289   15752 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 22:38:17.100307   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 22:38:17.131525   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 22:38:18.447141   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:20.364427   15752 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.837992997s)
	I0116 22:38:20.364457   15752 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 22:38:20.890323   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:21.290129   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.726470583s)
	I0116 22:38:21.290168   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:21.290177   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:21.290189   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.716560486s)
	I0116 22:38:21.290223   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:21.290238   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:21.290481   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:21.290515   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:21.290522   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:21.290541   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:21.290551   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:21.290628   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:21.290646   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:21.290659   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:21.290669   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:21.292326   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:21.292326   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:21.292352   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:21.292359   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:21.292353   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:21.292370   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:22.713441   15752 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 22:38:22.713475   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:22.716492   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:22.717024   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:22.717060   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:22.717291   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:22.717525   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:22.717721   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:22.717899   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:22.861338   15752 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 22:38:22.908587   15752 addons.go:234] Setting addon gcp-auth=true in "addons-033244"
	I0116 22:38:22.908661   15752 host.go:66] Checking if "addons-033244" exists ...
	I0116 22:38:22.909120   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:22.909168   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:22.924731   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I0116 22:38:22.925177   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:22.925668   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:22.925695   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:22.926051   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:22.926694   15752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:38:22.926732   15752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:38:22.941078   15752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I0116 22:38:22.941483   15752 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:38:22.941963   15752 main.go:141] libmachine: Using API Version  1
	I0116 22:38:22.941982   15752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:38:22.942258   15752 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:38:22.942445   15752 main.go:141] libmachine: (addons-033244) Calling .GetState
	I0116 22:38:22.944083   15752 main.go:141] libmachine: (addons-033244) Calling .DriverName
	I0116 22:38:22.944297   15752 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 22:38:22.944316   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHHostname
	I0116 22:38:22.947476   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:22.948048   15752 main.go:141] libmachine: (addons-033244) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6a:13", ip: ""} in network mk-addons-033244: {Iface:virbr1 ExpiryTime:2024-01-16 23:37:33 +0000 UTC Type:0 Mac:52:54:00:e3:6a:13 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-033244 Clientid:01:52:54:00:e3:6a:13}
	I0116 22:38:22.948102   15752 main.go:141] libmachine: (addons-033244) DBG | domain addons-033244 has defined IP address 192.168.39.234 and MAC address 52:54:00:e3:6a:13 in network mk-addons-033244
	I0116 22:38:22.948191   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHPort
	I0116 22:38:22.948388   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHKeyPath
	I0116 22:38:22.948542   15752 main.go:141] libmachine: (addons-033244) Calling .GetSSHUsername
	I0116 22:38:22.948727   15752 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/addons-033244/id_rsa Username:docker}
	I0116 22:38:23.217589   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:24.598224   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.831319236s)
	I0116 22:38:24.598273   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.99414352s)
	I0116 22:38:24.598289   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598288   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.827754471s)
	I0116 22:38:24.598303   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598305   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598317   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598331   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.814066172s)
	I0116 22:38:24.598357   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.671637847s)
	I0116 22:38:24.598412   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598428   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598431   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.145952176s)
	I0116 22:38:24.598370   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598474   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598499   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.126085198s)
	I0116 22:38:24.598520   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598532   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598316   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598585   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598607   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.965084376s)
	I0116 22:38:24.598459   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598623   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598706   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.932473691s)
	I0116 22:38:24.598732   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598745   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598774   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.598781   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.598791   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.598801   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.598802   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598810   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.598812   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598819   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598827   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.598872   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.598880   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.598878   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.790486534s)
	I0116 22:38:24.598889   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.598897   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	W0116 22:38:24.598907   15752 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 22:38:24.598926   15752 retry.go:31] will retry after 188.508849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 22:38:24.598992   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.786354277s)
	I0116 22:38:24.599009   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.599019   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.599123   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.599151   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.599162   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.599170   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.599180   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.599211   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.599239   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.599249   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.599258   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.599267   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.599312   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.599333   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.599341   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.599476   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.599499   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.599505   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.599675   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.599699   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.599708   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.600718   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.600722   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.600732   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.600749   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.600754   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.600759   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.600763   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.600769   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.600772   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.600778   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.600781   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.600840   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.600859   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.600868   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.600876   15752 addons.go:470] Verifying addon ingress=true in "addons-033244"
	I0116 22:38:24.603769   15752 out.go:177] * Verifying ingress addon...
	I0116 22:38:24.601107   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.601135   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.601453   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.602036   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.602054   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.602079   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.602097   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.602116   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.602252   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.602511   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.605806   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.605838   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.605856   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.605866   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.605870   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.605876   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.605880   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.605885   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.605888   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.605858   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.605931   15752 addons.go:470] Verifying addon registry=true in "addons-033244"
	I0116 22:38:24.607696   15752 out.go:177] * Verifying registry addon...
	I0116 22:38:24.606082   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.606111   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.606213   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.606233   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.606452   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.606493   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.606752   15752 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 22:38:24.607806   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.607820   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.607828   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.607840   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.607850   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.610669   15752 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-033244 service yakd-dashboard -n yakd-dashboard
	
	I0116 22:38:24.609437   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.609472   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.610163   15752 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 22:38:24.612043   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.612068   15752 addons.go:470] Verifying addon metrics-server=true in "addons-033244"
	I0116 22:38:24.640907   15752 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 22:38:24.640929   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:24.649103   15752 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 22:38:24.649191   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:24.665211   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.665236   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.665477   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.665521   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	W0116 22:38:24.665622   15752 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0116 22:38:24.669977   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:24.670000   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:24.670265   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:24.670286   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:24.670286   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:24.788517   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 22:38:25.180071   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:25.195904   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:25.236784   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:25.443672   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.312095892s)
	I0116 22:38:25.443729   15752 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.499415022s)
	I0116 22:38:25.443732   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:25.443746   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:25.445837   15752 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 22:38:25.444048   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:25.444077   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:25.447497   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:25.448980   15752 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 22:38:25.447516   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:25.450435   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:25.450498   15752 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 22:38:25.450521   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 22:38:25.450745   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:25.450803   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:25.450824   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:25.450835   15752 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-033244"
	I0116 22:38:25.452435   15752 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 22:38:25.454539   15752 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 22:38:25.497216   15752 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 22:38:25.497244   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 22:38:25.517995   15752 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 22:38:25.518015   15752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 22:38:25.557685   15752 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 22:38:25.557716   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:25.633618   15752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 22:38:25.643296   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:25.691577   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:25.966896   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:26.112859   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:26.122763   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:26.463888   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:26.620835   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:26.620893   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:26.963842   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:27.136358   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:27.140079   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:27.174573   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.38600775s)
	I0116 22:38:27.174626   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:27.174639   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:27.174911   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:27.174964   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:27.174974   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:27.174986   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:27.174992   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:27.175308   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:27.175349   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:27.175363   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:27.521818   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:27.553643   15752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.919978934s)
	I0116 22:38:27.553694   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:27.553706   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:27.554016   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:27.554039   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:27.554101   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:27.554224   15752 main.go:141] libmachine: Making call to close driver server
	I0116 22:38:27.554248   15752 main.go:141] libmachine: (addons-033244) Calling .Close
	I0116 22:38:27.554483   15752 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:38:27.554499   15752 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:38:27.554540   15752 main.go:141] libmachine: (addons-033244) DBG | Closing plugin on server side
	I0116 22:38:27.555549   15752 addons.go:470] Verifying addon gcp-auth=true in "addons-033244"
	I0116 22:38:27.557304   15752 out.go:177] * Verifying gcp-auth addon...
	I0116 22:38:27.559846   15752 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 22:38:27.571920   15752 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 22:38:27.571939   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:27.656633   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:27.664032   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:27.673933   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:27.965059   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:28.064470   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:28.113046   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:28.116624   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:28.478077   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:28.566023   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:28.612092   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:28.616026   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:28.978270   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:29.064707   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:29.118868   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:29.119776   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:29.476039   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:29.564975   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:29.613726   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:29.616688   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:29.961914   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:30.063425   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:30.112249   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:30.116409   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:30.159857   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:30.461402   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:30.564221   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:30.612254   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:30.616096   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:30.962016   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:31.063370   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:31.112719   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:31.116414   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:31.461305   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:31.563830   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:31.612259   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:31.616286   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:31.965947   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:32.073242   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:32.125161   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:32.138495   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:32.176933   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:32.461110   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:32.565385   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:32.619533   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:32.621350   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:32.965294   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:33.072637   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:33.114089   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:33.123141   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:33.459856   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:33.575984   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:33.614124   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:33.622593   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:33.964271   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:34.071267   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:34.115283   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:34.124643   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:34.475367   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:34.570604   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:34.612365   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:34.621103   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:34.669447   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:34.962014   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:35.064591   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:35.114584   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:35.117080   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:35.462722   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:35.569971   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:35.615482   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:35.620689   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:35.961983   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:36.063741   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:36.112278   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:36.116548   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:36.580123   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:36.582402   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:36.614635   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:36.618580   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:36.691962   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:36.960230   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:37.063885   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:37.119935   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:37.120519   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:37.465378   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:37.563857   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:37.615871   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:37.619984   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:37.961170   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:38.064568   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:38.113227   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:38.116056   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:38.461040   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:38.564334   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:38.612711   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:38.617037   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:38.960851   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:39.063655   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:39.112714   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:39.116836   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:39.160195   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:39.461155   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:39.564404   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:39.612993   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:39.616247   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:39.967073   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:40.065093   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:40.112471   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:40.116909   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:40.461923   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:40.564046   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:40.614312   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:40.617240   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:40.960840   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:41.065142   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:41.112032   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:41.115979   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:41.462015   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:41.567120   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:41.614442   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:41.617213   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:41.659385   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:41.960490   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:42.066353   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:42.113410   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:42.116681   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:42.460375   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:42.929601   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:42.930037   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:42.930320   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:42.963133   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:43.064079   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:43.111954   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:43.117143   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:43.460585   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:43.564131   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:43.612021   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:43.615974   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:43.960792   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:44.063701   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:44.113331   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:44.118221   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:44.159845   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:44.491190   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:44.566852   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:44.615197   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:44.619276   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:44.960645   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:45.063500   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:45.112478   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:45.116553   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:45.460457   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:45.565076   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:45.613212   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:45.617455   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:45.960530   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:46.063463   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:46.112948   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:46.116734   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:46.461864   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:46.563660   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:46.614012   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:46.617877   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:46.659870   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:46.962055   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:47.391045   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:47.395384   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:47.399998   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:47.462087   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:47.564196   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:47.612035   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:47.616159   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:47.960795   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:48.064010   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:48.112548   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:48.117273   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:48.463533   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:48.565619   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:48.611953   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:48.616331   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:48.660111   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:48.961226   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:49.065932   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:49.118691   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:49.122461   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:49.461128   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:49.563884   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:49.614994   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:49.629624   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:49.960737   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:50.065241   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:50.112617   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:50.121256   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:50.462426   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:50.565001   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:50.613326   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:50.616490   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:51.125523   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:51.198806   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:51.199268   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:51.201294   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:51.202927   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:51.460778   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:51.563966   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:51.611929   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:51.620604   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:51.959494   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:52.065613   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:52.112943   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:52.117247   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:52.461127   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:52.564007   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:52.614231   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:52.617770   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:52.962211   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:53.064087   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:53.112258   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:53.117320   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:53.465632   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:53.564152   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:53.612803   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:53.617227   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:53.659653   15752 pod_ready.go:102] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"False"
	I0116 22:38:53.960593   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:54.064452   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:54.112812   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:54.116227   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:54.461007   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:54.563791   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:54.613377   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:54.616779   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:54.960903   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:55.064579   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:55.113031   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:55.116314   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:55.460386   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:55.564324   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:55.613069   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:55.616635   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:56.055705   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:56.083768   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:56.112899   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:56.116185   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:56.158566   15752 pod_ready.go:92] pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace has status "Ready":"True"
	I0116 22:38:56.158587   15752 pod_ready.go:81] duration metric: took 40.505850999s waiting for pod "coredns-5dd5756b68-dw95f" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.158596   15752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rc48z" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.460773   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:56.563148   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:56.612294   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:56.616814   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:56.666289   15752 pod_ready.go:92] pod "coredns-5dd5756b68-rc48z" in "kube-system" namespace has status "Ready":"True"
	I0116 22:38:56.666317   15752 pod_ready.go:81] duration metric: took 507.713399ms waiting for pod "coredns-5dd5756b68-rc48z" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.666330   15752 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.672850   15752 pod_ready.go:92] pod "etcd-addons-033244" in "kube-system" namespace has status "Ready":"True"
	I0116 22:38:56.672878   15752 pod_ready.go:81] duration metric: took 6.522288ms waiting for pod "etcd-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.672890   15752 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.678837   15752 pod_ready.go:92] pod "kube-apiserver-addons-033244" in "kube-system" namespace has status "Ready":"True"
	I0116 22:38:56.678865   15752 pod_ready.go:81] duration metric: took 5.967493ms waiting for pod "kube-apiserver-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.678877   15752 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.686432   15752 pod_ready.go:92] pod "kube-controller-manager-addons-033244" in "kube-system" namespace has status "Ready":"True"
	I0116 22:38:56.686461   15752 pod_ready.go:81] duration metric: took 7.574484ms waiting for pod "kube-controller-manager-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.686473   15752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blz7c" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.956720   15752 pod_ready.go:92] pod "kube-proxy-blz7c" in "kube-system" namespace has status "Ready":"True"
	I0116 22:38:56.956743   15752 pod_ready.go:81] duration metric: took 270.262757ms waiting for pod "kube-proxy-blz7c" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.956752   15752 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:56.961172   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:57.063901   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:57.113846   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:57.120835   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:57.358054   15752 pod_ready.go:92] pod "kube-scheduler-addons-033244" in "kube-system" namespace has status "Ready":"True"
	I0116 22:38:57.358077   15752 pod_ready.go:81] duration metric: took 401.319945ms waiting for pod "kube-scheduler-addons-033244" in "kube-system" namespace to be "Ready" ...
	I0116 22:38:57.358086   15752 pod_ready.go:38] duration metric: took 41.745601393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 22:38:57.358102   15752 api_server.go:52] waiting for apiserver process to appear ...
	I0116 22:38:57.358158   15752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 22:38:57.396650   15752 api_server.go:72] duration metric: took 42.08325361s to wait for apiserver process to appear ...
	I0116 22:38:57.396693   15752 api_server.go:88] waiting for apiserver healthz status ...
	I0116 22:38:57.396710   15752 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0116 22:38:57.402646   15752 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0116 22:38:57.404050   15752 api_server.go:141] control plane version: v1.28.4
	I0116 22:38:57.404080   15752 api_server.go:131] duration metric: took 7.37933ms to wait for apiserver health ...
	I0116 22:38:57.404092   15752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 22:38:57.464365   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:57.563748   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:57.564253   15752 system_pods.go:59] 19 kube-system pods found
	I0116 22:38:57.564289   15752 system_pods.go:61] "coredns-5dd5756b68-dw95f" [5e9d7112-e3af-448f-9c2e-381418ae9772] Running
	I0116 22:38:57.564295   15752 system_pods.go:61] "coredns-5dd5756b68-rc48z" [a381f197-a8f5-4bd1-a31a-dcb532ccfbcc] Running
	I0116 22:38:57.564306   15752 system_pods.go:61] "csi-hostpath-attacher-0" [4eb0b40d-12d0-48ee-9115-f9e562dc12f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 22:38:57.564312   15752 system_pods.go:61] "csi-hostpath-resizer-0" [abebfb0b-e396-44e2-aaa1-f9dbcc236b14] Running
	I0116 22:38:57.564320   15752 system_pods.go:61] "csi-hostpathplugin-jpdn5" [a047b2c2-4066-45a4-94f5-ed447633b65a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 22:38:57.564326   15752 system_pods.go:61] "etcd-addons-033244" [233547c7-d1e8-4314-8ffc-708de57fb5e8] Running
	I0116 22:38:57.564337   15752 system_pods.go:61] "kube-apiserver-addons-033244" [00b0b3b4-4c71-4bb3-8c88-3fa13fccfa23] Running
	I0116 22:38:57.564342   15752 system_pods.go:61] "kube-controller-manager-addons-033244" [2960bb07-6d4d-4aee-b392-0640e28183e5] Running
	I0116 22:38:57.564348   15752 system_pods.go:61] "kube-ingress-dns-minikube" [6be1d5f8-53b6-49da-8a81-309c4abeab7b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 22:38:57.564353   15752 system_pods.go:61] "kube-proxy-blz7c" [62a02cd2-8d73-4efd-a0aa-65632646bc9b] Running
	I0116 22:38:57.564359   15752 system_pods.go:61] "kube-scheduler-addons-033244" [998a5b29-76a5-4ebe-96af-23ec387fb78e] Running
	I0116 22:38:57.564365   15752 system_pods.go:61] "metrics-server-7c66d45ddc-khssf" [2ccf4b20-f4de-4a17-8529-1399f0552a28] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 22:38:57.564369   15752 system_pods.go:61] "nvidia-device-plugin-daemonset-44vfc" [9c7e0da1-cb2d-4e04-bc03-be4506fab2af] Running
	I0116 22:38:57.564378   15752 system_pods.go:61] "registry-b9qhk" [9a6a8c0d-3a15-42ec-8b4e-a34e508c3590] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 22:38:57.564384   15752 system_pods.go:61] "registry-proxy-lzc5j" [3aa946b0-5483-4ad8-82c0-c41ab2daa594] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 22:38:57.564392   15752 system_pods.go:61] "snapshot-controller-58dbcc7b99-9njnp" [a05df66e-feb0-43f4-a4e4-fe2d0fc2fba6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 22:38:57.564402   15752 system_pods.go:61] "snapshot-controller-58dbcc7b99-bc62n" [d11cd145-5cdb-4004-a9ff-7954d44c79ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 22:38:57.564407   15752 system_pods.go:61] "storage-provisioner" [57e7e896-ac59-44b4-a9c4-5606326f8d86] Running
	I0116 22:38:57.564413   15752 system_pods.go:61] "tiller-deploy-7b677967b9-29mdf" [a3510868-5d9a-481c-a603-e4f068e40e0b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0116 22:38:57.564420   15752 system_pods.go:74] duration metric: took 160.322235ms to wait for pod list to return data ...
	I0116 22:38:57.564430   15752 default_sa.go:34] waiting for default service account to be created ...
	I0116 22:38:57.613334   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:57.617032   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:57.758410   15752 default_sa.go:45] found service account: "default"
	I0116 22:38:57.758443   15752 default_sa.go:55] duration metric: took 194.004847ms for default service account to be created ...
	I0116 22:38:57.758453   15752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 22:38:57.959529   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:57.972035   15752 system_pods.go:86] 19 kube-system pods found
	I0116 22:38:57.972069   15752 system_pods.go:89] "coredns-5dd5756b68-dw95f" [5e9d7112-e3af-448f-9c2e-381418ae9772] Running
	I0116 22:38:57.972078   15752 system_pods.go:89] "coredns-5dd5756b68-rc48z" [a381f197-a8f5-4bd1-a31a-dcb532ccfbcc] Running
	I0116 22:38:57.972089   15752 system_pods.go:89] "csi-hostpath-attacher-0" [4eb0b40d-12d0-48ee-9115-f9e562dc12f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 22:38:57.972097   15752 system_pods.go:89] "csi-hostpath-resizer-0" [abebfb0b-e396-44e2-aaa1-f9dbcc236b14] Running
	I0116 22:38:57.972114   15752 system_pods.go:89] "csi-hostpathplugin-jpdn5" [a047b2c2-4066-45a4-94f5-ed447633b65a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 22:38:57.972122   15752 system_pods.go:89] "etcd-addons-033244" [233547c7-d1e8-4314-8ffc-708de57fb5e8] Running
	I0116 22:38:57.972131   15752 system_pods.go:89] "kube-apiserver-addons-033244" [00b0b3b4-4c71-4bb3-8c88-3fa13fccfa23] Running
	I0116 22:38:57.972142   15752 system_pods.go:89] "kube-controller-manager-addons-033244" [2960bb07-6d4d-4aee-b392-0640e28183e5] Running
	I0116 22:38:57.972152   15752 system_pods.go:89] "kube-ingress-dns-minikube" [6be1d5f8-53b6-49da-8a81-309c4abeab7b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 22:38:57.972159   15752 system_pods.go:89] "kube-proxy-blz7c" [62a02cd2-8d73-4efd-a0aa-65632646bc9b] Running
	I0116 22:38:57.972168   15752 system_pods.go:89] "kube-scheduler-addons-033244" [998a5b29-76a5-4ebe-96af-23ec387fb78e] Running
	I0116 22:38:57.972177   15752 system_pods.go:89] "metrics-server-7c66d45ddc-khssf" [2ccf4b20-f4de-4a17-8529-1399f0552a28] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 22:38:57.972193   15752 system_pods.go:89] "nvidia-device-plugin-daemonset-44vfc" [9c7e0da1-cb2d-4e04-bc03-be4506fab2af] Running
	I0116 22:38:57.972203   15752 system_pods.go:89] "registry-b9qhk" [9a6a8c0d-3a15-42ec-8b4e-a34e508c3590] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 22:38:57.972213   15752 system_pods.go:89] "registry-proxy-lzc5j" [3aa946b0-5483-4ad8-82c0-c41ab2daa594] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 22:38:57.972223   15752 system_pods.go:89] "snapshot-controller-58dbcc7b99-9njnp" [a05df66e-feb0-43f4-a4e4-fe2d0fc2fba6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 22:38:57.972234   15752 system_pods.go:89] "snapshot-controller-58dbcc7b99-bc62n" [d11cd145-5cdb-4004-a9ff-7954d44c79ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 22:38:57.972242   15752 system_pods.go:89] "storage-provisioner" [57e7e896-ac59-44b4-a9c4-5606326f8d86] Running
	I0116 22:38:57.972251   15752 system_pods.go:89] "tiller-deploy-7b677967b9-29mdf" [a3510868-5d9a-481c-a603-e4f068e40e0b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0116 22:38:57.972263   15752 system_pods.go:126] duration metric: took 213.804571ms to wait for k8s-apps to be running ...
	I0116 22:38:57.972272   15752 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 22:38:57.972328   15752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 22:38:57.997370   15752 system_svc.go:56] duration metric: took 25.091136ms WaitForService to wait for kubelet.
	I0116 22:38:57.997404   15752 kubeadm.go:581] duration metric: took 42.684014234s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 22:38:57.997423   15752 node_conditions.go:102] verifying NodePressure condition ...
	I0116 22:38:58.064940   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:58.112506   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:58.117181   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:58.156856   15752 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 22:38:58.156887   15752 node_conditions.go:123] node cpu capacity is 2
	I0116 22:38:58.156899   15752 node_conditions.go:105] duration metric: took 159.471959ms to run NodePressure ...
	I0116 22:38:58.156910   15752 start.go:228] waiting for startup goroutines ...
	I0116 22:38:58.465696   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:58.566397   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:58.613399   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:58.617274   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:58.961165   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:59.064505   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:59.115287   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:59.117364   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:59.461060   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:38:59.566856   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:38:59.612837   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:38:59.618531   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:38:59.961170   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:00.063907   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:00.113060   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:00.116055   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:00.460534   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:00.564825   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:00.613836   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:00.616878   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:00.960652   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:01.063922   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:01.122765   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:01.126036   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:01.461910   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:01.565115   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:01.612612   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:01.618376   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:01.969863   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:02.073438   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:02.149334   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:02.159620   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:02.462236   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:02.564898   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:02.613639   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:02.619603   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:02.961924   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:03.065245   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:03.112443   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:03.118120   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:03.461466   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:03.564927   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:03.614811   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:03.618744   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:03.960731   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:04.066661   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:04.112620   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:04.116888   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:04.460061   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:04.564374   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:04.612147   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:04.615839   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:04.973580   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:05.064395   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:05.112243   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:05.116277   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:05.460870   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:05.680378   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:05.680671   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:05.682680   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:05.960175   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:06.065152   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:06.112579   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:06.117073   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:06.460455   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:06.563364   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:06.612586   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:06.616459   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:06.962570   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:07.077717   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:07.112551   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:07.117295   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:07.461026   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:07.566686   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:07.612856   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:07.616010   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:07.961043   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:08.064044   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:08.112663   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:08.116896   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:08.460783   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:08.564184   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:08.612817   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:08.618554   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:08.973947   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:09.063485   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:09.113311   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:09.117452   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:09.461824   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:09.563932   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:09.612803   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:09.617909   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:09.960155   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:10.064175   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:10.112272   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:10.117077   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:10.475099   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:10.564200   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:10.613003   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:10.617544   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:11.105689   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:11.106759   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:11.113400   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:11.117688   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:11.460805   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:11.564396   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:11.621772   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:11.628463   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:11.967418   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:12.068019   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:12.113159   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:12.116683   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:12.461506   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:12.563815   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:12.613943   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:12.623189   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:12.960333   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:13.064504   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:13.112930   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:13.117040   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:13.461097   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:13.564783   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:13.613030   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:13.616302   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:13.960838   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:14.063480   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:14.112604   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:14.116965   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:14.460139   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:14.564436   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:14.612294   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:14.616242   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:14.969203   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:15.064456   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:15.112752   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:15.116436   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:15.461372   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:15.565515   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:15.612872   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:15.619876   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:15.961257   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:16.064235   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:16.114309   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:16.122342   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:16.463358   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:16.564108   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:16.613521   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:16.621408   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:16.961267   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:17.063572   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:17.113263   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:17.117218   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:17.460414   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:17.564239   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:17.612785   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:17.616574   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:17.961799   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:18.064168   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:18.112593   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:18.116905   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:18.462233   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:18.565047   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:18.613328   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:18.616639   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:18.960563   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:19.064764   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:19.113918   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:19.117484   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:19.462805   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:19.563792   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:19.613179   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:19.616655   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:19.960530   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:20.064838   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:20.112922   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:20.115965   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:20.460858   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:20.564005   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:20.612016   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:20.617224   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:20.960918   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:21.063463   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:21.113642   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:21.116391   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:21.470188   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:21.563367   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:21.612230   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:21.615993   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:21.961441   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:22.063619   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:22.112690   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:22.116528   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:22.464982   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:22.564113   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:22.613233   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:22.617478   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:22.961315   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:23.063791   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:23.113133   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:23.116384   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:23.460852   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:23.564349   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:23.612664   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:23.617446   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:23.960818   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:24.063901   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:24.113109   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:24.116016   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:24.461777   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:24.563982   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:24.612846   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:24.616405   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:24.960846   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:25.063989   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:25.112459   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:25.116757   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:25.460767   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:25.564255   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:25.613481   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:25.616735   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:25.960643   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:26.063715   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:26.113492   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:26.116847   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:26.465139   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:26.572419   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:26.613000   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:26.619043   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:26.960145   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:27.064219   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:27.112328   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:27.116363   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:27.461000   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:27.563821   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:27.613039   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:27.616289   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:27.960954   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:28.063872   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:28.112867   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:28.118518   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:28.464289   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:28.563783   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:28.618191   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:28.620262   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:28.962223   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:29.064366   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:29.112423   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:29.121784   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:29.461160   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:29.564059   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:29.612296   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:29.616342   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:29.960479   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:30.064047   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:30.112934   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:30.116169   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:30.460732   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:30.563871   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:30.613255   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:30.616299   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:30.960561   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:31.063533   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:31.113090   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:31.115905   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:31.461959   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:31.563715   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:31.613081   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:31.623278   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:31.960785   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:32.064060   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:32.113168   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:32.115973   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:32.460327   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:32.563985   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:32.612030   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:32.615999   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:32.960394   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:33.064355   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:33.113025   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:33.118000   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:33.462007   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:33.563648   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:33.613089   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:33.616278   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:33.961864   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:34.064158   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:34.112683   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:34.116792   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:34.460707   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:34.563647   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:34.613561   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:34.617843   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:34.960676   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:35.063894   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:35.112910   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:35.116088   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:35.460976   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:36.044457   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:36.045190   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:36.045813   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:36.047959   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:36.064924   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:36.113467   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:36.116346   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:36.460976   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:36.564669   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:36.612811   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:36.618762   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:36.960941   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:37.063292   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:37.112945   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:37.117866   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:37.461026   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:37.567267   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:37.612302   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:37.616558   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:37.965696   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:38.070524   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:38.112977   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:38.117538   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 22:39:38.461740   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:38.591657   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:38.644330   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:38.666781   15752 kapi.go:107] duration metric: took 1m14.05661536s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 22:39:38.961057   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:39.063681   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:39.112906   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:39.463958   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:39.573726   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:39.663849   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:39.962394   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:40.064902   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:40.121327   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:40.462060   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:40.564368   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:40.612323   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:40.967008   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:41.063749   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:41.113698   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:41.460959   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:41.563872   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:41.613567   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:41.976754   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:42.067738   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:42.114950   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:42.502769   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:42.575814   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:42.620606   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:42.961630   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:43.064225   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:43.113007   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:43.461116   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:43.564085   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:43.612630   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:43.961984   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:44.063839   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:44.119996   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:44.463361   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:44.564529   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:44.612837   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:44.961736   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:45.064129   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:45.115738   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:45.461699   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:45.564538   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:45.613828   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:45.961426   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:46.064153   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:46.112351   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:46.463416   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:46.564266   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:46.682555   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:46.962087   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:47.064221   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:47.112330   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:47.461369   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:47.564567   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:47.612736   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:47.963091   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:48.063541   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:48.121802   15752 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 22:39:48.460589   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:48.564707   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:48.613038   15752 kapi.go:107] duration metric: took 1m24.006284715s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 22:39:48.961115   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:49.063867   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:49.466677   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:49.571516   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:49.962583   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:50.064850   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:50.461161   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:50.568250   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:50.961587   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:51.064601   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:51.461754   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:51.563939   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:51.960435   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:52.064248   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:52.460439   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:52.564391   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:52.963380   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:53.064462   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:53.461089   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:53.563620   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:53.960029   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:54.064302   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:54.461162   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:54.564658   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:54.982518   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:55.136224   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:55.464371   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:55.564441   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:55.961182   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 22:39:56.064053   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:56.461841   15752 kapi.go:107] duration metric: took 1m31.007299953s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 22:39:56.563957   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:57.065803   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:57.564095   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:58.064134   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:58.564255   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:59.064255   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:39:59.564918   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:00.063844   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:00.563498   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:01.064679   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:01.563354   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:02.064499   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:02.564638   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:03.064444   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:03.565102   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:04.064045   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:04.565043   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:05.064157   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:05.564361   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:06.064430   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:06.564822   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:07.063387   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:07.565346   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:08.064474   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:08.564721   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:09.063744   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:09.563897   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:10.064609   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:10.563673   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:11.063696   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:11.564421   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:12.064693   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:12.564786   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:13.063648   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:13.567977   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:14.064598   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:14.565621   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:15.065494   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:15.563767   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:16.063787   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:16.563675   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:17.064184   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:17.564389   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:18.063959   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:18.563547   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:19.064588   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:19.564544   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:20.064678   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:20.564563   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:21.064713   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:21.563729   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:22.063980   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:22.563721   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:23.063576   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:23.564629   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:24.065120   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:24.563483   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:25.064113   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:25.563451   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:26.064081   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:26.563883   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:27.063444   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:27.564128   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:28.064368   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:28.564832   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:29.063605   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:29.565019   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:30.063594   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:30.567143   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:31.063987   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:31.564188   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:32.063966   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:32.563698   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:33.063748   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:33.564255   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:34.063567   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:34.567456   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:35.064480   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:35.565934   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:36.063592   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:36.564503   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:37.064544   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:37.564701   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:38.063488   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:38.564078   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:39.063960   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:39.563994   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:40.063540   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:40.564079   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:41.064441   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:41.564356   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:42.067607   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:42.564334   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:43.066341   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:43.564106   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:44.064056   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:44.565045   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:45.064951   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:45.564336   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:46.063538   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:46.564717   15752 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 22:40:47.064806   15752 kapi.go:107] duration metric: took 2m19.504957275s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 22:40:47.066847   15752 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-033244 cluster.
	I0116 22:40:47.068349   15752 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 22:40:47.070062   15752 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 22:40:47.071843   15752 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, helm-tiller, yakd, metrics-server, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0116 22:40:47.073347   15752 addons.go:505] enable addons completed in 2m31.884066792s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns inspektor-gadget storage-provisioner helm-tiller yakd metrics-server default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0116 22:40:47.073387   15752 start.go:233] waiting for cluster config update ...
	I0116 22:40:47.073410   15752 start.go:242] writing updated cluster config ...
	I0116 22:40:47.073668   15752 ssh_runner.go:195] Run: rm -f paused
	I0116 22:40:47.124058   15752 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 22:40:47.126000   15752 out.go:177] * Done! kubectl is now configured to use "addons-033244" cluster and "default" namespace by default
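	[editor's note] The gcp-auth messages above describe two follow-up actions: labeling a pod with the `gcp-auth-skip-secret` key so credentials are not mounted into it, and rerunning addons enable with --refresh so already-running pods get the mount. The sketch below illustrates both against this test's cluster; the pod name, image, and label value are hypothetical examples, not part of this run.

	# Hypothetical pod: the gcp-auth-skip-secret label asks the gcp-auth webhook
	# not to mount GCP credentials into this pod (name/image are placeholders).
	kubectl --context addons-033244 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # example name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: gcr.io/google-samples/hello-app:1.0
	EOF

	# For pods created before the addon was enabled, rerun the enable step with
	# --refresh, as the log line above suggests.
	out/minikube-linux-amd64 -p addons-033244 addons enable gcp-auth --refresh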
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 22:37:30 UTC, ends at Tue 2024-01-16 22:43:59 UTC. --
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.702461703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9509437b-b3fc-4344-adf1-8d5e8685b4ac name=/runtime.v1.RuntimeService/Version
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.703954625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a599a97a-b213-4863-864d-14c250eeae31 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.705220155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705445039705202725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575394,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=a599a97a-b213-4863-864d-14c250eeae31 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.706242911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c775651-ae2e-4e59-812c-bce35b45f258 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.706295187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c775651-ae2e-4e59-812c-bce35b45f258 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.706719722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7941f8117418b93b57487f9537b01125d766c2479dff6c80da3a72ae8a7c62f7,PodSandboxId:de7bc6bb4a1da504db4539474b20ad351cbcf363e9faca4a7fbe330b968ce584,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445032178238296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bf89p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c56cf2d-2d36-4e94-9aa4-0eda074c6d30,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0cd58f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3fb2e8ea60f87512c7306e23e7e43c9b46c12840834e5b3e37bc5d04e30ff,PodSandboxId:5e571a6d2cfa3d6bb1cbbb063e56cd4be5b084ab9182490fcd050ab3a4e82e28,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705444892612910396,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 161da6d6-00f5-4bed-85ce-e0fe7e9ef47e,},Annotations:map[string]string{io.kubernet
es.container.hash: 168a957e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7cf26f5547368907409cd2dbb0fb7f3e2c88f90f53b563b510c9df825d732a5,PodSandboxId:5a1be7017f5785ea692aa89310b66a28223afca5f5887b3f8eecbbd8737723fe,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705444880705927408,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-r2dlj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 68006346-d91a-4daf-bd72-77f14555bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 4575ea97,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d02b5d6222942b70cd02536f575dba4a013544f97ff1aba60dda0f364964554,PodSandboxId:350ef5f3791aeaec5168b096ce7dbb6aa7f5eb064b94e101a647dadf678b22ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705444845624992944,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-d7j45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c397e3b2-3430-4557-8be7-30cc078a5e72,},Annotations:map[string]string{io.kubernetes.container.hash: 1a217e84,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f76f8dacf31e7593a6bef133cf8aff8a16f71b1a54d51a718afe9588ac67d,PodSandboxId:b97c5ce5032e037dd242221fe2c2ee449ab14c60735aa6ee5abbda01b216cfe9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705444771797597032,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-77b9x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 466d023b-86c9-43a4-ab40-e9eac3a10b17,},Annotations:map[string]string{io.kubernetes.container.hash: 732aab84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997d1f73517a66649e458a2bd53d5439eb1ea3a34d705f162767afc001bfdb19,PodSandboxId:bc7ba0ff7fb98aabac081193c386719312e74f4e11cc9886e32f8ed095db0e6e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@s
ha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1705444771633874705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-2grxm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6de3bf59-000a-422f-9853-f79dc4e77323,},Annotations:map[string]string{io.kubernetes.container.hash: 7225b98b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e829df6a9326ab7620482efe54e2a4b433fe2c6b2d2ebd8219c9aa61d0724075,PodSandboxId:4970affe4c886be7013f9adb15b975e634b5a111496a1e629e7aee88bb0e14ec,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotation
s:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705444748942356960,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wlnqf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c278637-fa95-4701-894a-78833f5db5aa,},Annotations:map[string]string{io.kubernetes.container.hash: 30758f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a263c9ff9d57e7735c583161d80e22d326d5f695c9fef0c798dcc1750c3e75,PodSandboxId:579e5fd988e1406dd3f90c48b74fa80246cd010d3d60b0de4202c485fd75a276,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63
e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705444717542760125,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sdmtr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: ec6dcf21-656d-4cc6-a676-1966a5ebb1f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7b53f0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac684c8d49882265ac05a14bcabb28beb9b3aa75ce7867d94bf1ecd8fded1da0,PodSandboxId:6d0552a5c68125e034b7887d3d52ae87003fe53b49495d86ab6550e8c94b847c,Metadata:&ContainerMetadata{Name:storage-provisioner,
Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705444715183575763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e7e896-ac59-44b4-a9c4-5606326f8d86,},Annotations:map[string]string{io.kubernetes.container.hash: ede2bd4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc2f381283f8f1512f3b2f553133a933c5bdb343ab247d7d2c7abfff18f47fa,PodSandboxId:0b79e0381d5831dbd574d3dc10199742706c563fb2a33d80fd69ea03f61aa806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705444710855544492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blz7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a02cd2-8d73-4efd-a0aa-65632646bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9752ead5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa65f48fcbc8865292e162229d883931aef1a38e19d468be115e82875ec8a2b,PodSandboxId:fd0619da10bfa0db2b09593fd6fa201254099b3b902d0966477dcabb96a32205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89
fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700419255559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dw95f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9d7112-e3af-448f-9c2e-381418ae9772,},Annotations:map[string]string{io.kubernetes.container.hash: 727cf5c5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6dcd5a0d72534f2f4d28c29809cb8c463
fea6490ab022847de89afda557a26e,PodSandboxId:187f4da818b626bac253e6e9f07def2a3b67ed0dda49efb4fc1c82508939a757,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700525348283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rc48z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a381f197-a8f5-4bd1-a31a-dcb532ccfbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2abb13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e64d151ec803f8393e4f80358e91d42401aa140351017c6b283098ed0d0404,PodSandboxId:67832f8cb202bcb3dcdd44c0e4c350a61d81015c02480f32e39ce1123ce7a4b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705444674784575975,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cca846aec94fa8d0877340f001c018,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539fb6f69b419ec8ed74b39468d5fea1b243a93e24fed4d17fd62fb6501422d6,PodSandboxId:af916381a2eb834c4cff8255b84d6d7bbb1b7e8a09f0321126bda379e0e97d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705444674518846965,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f9c947320f44ec6a3748142bf708e4,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d906df4b6a5299397fa79013ae1f154639c7c3a9d3c36ead6547848d7c1ebfa6,PodSandboxId:493e71a53a5377394065adda546e4b2485f9fc1fe308913d0379e0ba53009332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705444674435636024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751411d8ca0d323c6088b1adea575403,},Annotations:map[string]string{io.kubernetes.container.hash: dbc182e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a776c6c37d4db37443dcd1ae504caded32d63813eb6541e5805d0a1ceca319f,PodSandboxId:ea4e2bced6ebe8240dd49e1ea5f5bbba073e2f59370137937ab843374f859278,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705444674344732443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be8bfcc62234f8b2d621a2c74061dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c775651-ae2e-4e59-812c-bce35b45f258 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.739963111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=219cf45b-9cea-4624-b7f8-86bd9023a562 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.740017475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=219cf45b-9cea-4624-b7f8-86bd9023a562 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.741212007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e8021cee-fffe-465e-a901-64e32fb5ad9e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.742437866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705445039742419178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575394,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=e8021cee-fffe-465e-a901-64e32fb5ad9e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.742982843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=33f2023e-04b6-486d-bdba-5e9a3483ef6a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.743036125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=33f2023e-04b6-486d-bdba-5e9a3483ef6a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.743484184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7941f8117418b93b57487f9537b01125d766c2479dff6c80da3a72ae8a7c62f7,PodSandboxId:de7bc6bb4a1da504db4539474b20ad351cbcf363e9faca4a7fbe330b968ce584,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445032178238296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bf89p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c56cf2d-2d36-4e94-9aa4-0eda074c6d30,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0cd58f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3fb2e8ea60f87512c7306e23e7e43c9b46c12840834e5b3e37bc5d04e30ff,PodSandboxId:5e571a6d2cfa3d6bb1cbbb063e56cd4be5b084ab9182490fcd050ab3a4e82e28,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705444892612910396,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 161da6d6-00f5-4bed-85ce-e0fe7e9ef47e,},Annotations:map[string]string{io.kubernet
es.container.hash: 168a957e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7cf26f5547368907409cd2dbb0fb7f3e2c88f90f53b563b510c9df825d732a5,PodSandboxId:5a1be7017f5785ea692aa89310b66a28223afca5f5887b3f8eecbbd8737723fe,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705444880705927408,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-r2dlj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 68006346-d91a-4daf-bd72-77f14555bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 4575ea97,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d02b5d6222942b70cd02536f575dba4a013544f97ff1aba60dda0f364964554,PodSandboxId:350ef5f3791aeaec5168b096ce7dbb6aa7f5eb064b94e101a647dadf678b22ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705444845624992944,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-d7j45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c397e3b2-3430-4557-8be7-30cc078a5e72,},Annotations:map[string]string{io.kubernetes.container.hash: 1a217e84,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f76f8dacf31e7593a6bef133cf8aff8a16f71b1a54d51a718afe9588ac67d,PodSandboxId:b97c5ce5032e037dd242221fe2c2ee449ab14c60735aa6ee5abbda01b216cfe9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705444771797597032,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-77b9x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 466d023b-86c9-43a4-ab40-e9eac3a10b17,},Annotations:map[string]string{io.kubernetes.container.hash: 732aab84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997d1f73517a66649e458a2bd53d5439eb1ea3a34d705f162767afc001bfdb19,PodSandboxId:bc7ba0ff7fb98aabac081193c386719312e74f4e11cc9886e32f8ed095db0e6e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@s
ha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1705444771633874705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-2grxm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6de3bf59-000a-422f-9853-f79dc4e77323,},Annotations:map[string]string{io.kubernetes.container.hash: 7225b98b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e829df6a9326ab7620482efe54e2a4b433fe2c6b2d2ebd8219c9aa61d0724075,PodSandboxId:4970affe4c886be7013f9adb15b975e634b5a111496a1e629e7aee88bb0e14ec,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotation
s:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705444748942356960,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wlnqf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c278637-fa95-4701-894a-78833f5db5aa,},Annotations:map[string]string{io.kubernetes.container.hash: 30758f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a263c9ff9d57e7735c583161d80e22d326d5f695c9fef0c798dcc1750c3e75,PodSandboxId:579e5fd988e1406dd3f90c48b74fa80246cd010d3d60b0de4202c485fd75a276,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63
e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705444717542760125,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sdmtr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: ec6dcf21-656d-4cc6-a676-1966a5ebb1f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7b53f0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac684c8d49882265ac05a14bcabb28beb9b3aa75ce7867d94bf1ecd8fded1da0,PodSandboxId:6d0552a5c68125e034b7887d3d52ae87003fe53b49495d86ab6550e8c94b847c,Metadata:&ContainerMetadata{Name:storage-provisioner,
Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705444715183575763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e7e896-ac59-44b4-a9c4-5606326f8d86,},Annotations:map[string]string{io.kubernetes.container.hash: ede2bd4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc2f381283f8f1512f3b2f553133a933c5bdb343ab247d7d2c7abfff18f47fa,PodSandboxId:0b79e0381d5831dbd574d3dc10199742706c563fb2a33d80fd69ea03f61aa806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705444710855544492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blz7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a02cd2-8d73-4efd-a0aa-65632646bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9752ead5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa65f48fcbc8865292e162229d883931aef1a38e19d468be115e82875ec8a2b,PodSandboxId:fd0619da10bfa0db2b09593fd6fa201254099b3b902d0966477dcabb96a32205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89
fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700419255559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dw95f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9d7112-e3af-448f-9c2e-381418ae9772,},Annotations:map[string]string{io.kubernetes.container.hash: 727cf5c5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6dcd5a0d72534f2f4d28c29809cb8c463
fea6490ab022847de89afda557a26e,PodSandboxId:187f4da818b626bac253e6e9f07def2a3b67ed0dda49efb4fc1c82508939a757,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700525348283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rc48z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a381f197-a8f5-4bd1-a31a-dcb532ccfbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2abb13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e64d151ec803f8393e4f80358e91d42401aa140351017c6b283098ed0d0404,PodSandboxId:67832f8cb202bcb3dcdd44c0e4c350a61d81015c02480f32e39ce1123ce7a4b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705444674784575975,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cca846aec94fa8d0877340f001c018,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539fb6f69b419ec8ed74b39468d5fea1b243a93e24fed4d17fd62fb6501422d6,PodSandboxId:af916381a2eb834c4cff8255b84d6d7bbb1b7e8a09f0321126bda379e0e97d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705444674518846965,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f9c947320f44ec6a3748142bf708e4,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d906df4b6a5299397fa79013ae1f154639c7c3a9d3c36ead6547848d7c1ebfa6,PodSandboxId:493e71a53a5377394065adda546e4b2485f9fc1fe308913d0379e0ba53009332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705444674435636024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751411d8ca0d323c6088b1adea575403,},Annotations:map[string]string{io.kubernetes.container.hash: dbc182e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a776c6c37d4db37443dcd1ae504caded32d63813eb6541e5805d0a1ceca319f,PodSandboxId:ea4e2bced6ebe8240dd49e1ea5f5bbba073e2f59370137937ab843374f859278,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705444674344732443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be8bfcc62234f8b2d621a2c74061dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=33f2023e-04b6-486d-bdba-5e9a3483ef6a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.784587092Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9f14d01a-eee2-4aad-a385-032be1381aa3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.784914967Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:de7bc6bb4a1da504db4539474b20ad351cbcf363e9faca4a7fbe330b968ce584,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-bf89p,Uid:4c56cf2d-2d36-4e94-9aa4-0eda074c6d30,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705445029237560866,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-bf89p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c56cf2d-2d36-4e94-9aa4-0eda074c6d30,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:43:48.899116937Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e571a6d2cfa3d6bb1cbbb063e56cd4be5b084ab9182490fcd050ab3a4e82e28,Metadata:&PodSandboxMetadata{Name:nginx,Uid:161da6d6-00f5-4bed-85ce-e0fe7e9ef47e,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1705444887077045931,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 161da6d6-00f5-4bed-85ce-e0fe7e9ef47e,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:41:26.747418686Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a1be7017f5785ea692aa89310b66a28223afca5f5887b3f8eecbbd8737723fe,Metadata:&PodSandboxMetadata{Name:headlamp-7ddfbb94ff-r2dlj,Uid:68006346-d91a-4daf-bd72-77f14555bdd0,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444873572613844,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7ddfbb94ff-r2dlj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 68006346-d91a-4daf-bd72-77f14555bdd0,pod-template-hash: 7ddfbb94ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
01-16T22:41:13.236498329Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:350ef5f3791aeaec5168b096ce7dbb6aa7f5eb064b94e101a647dadf678b22ca,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-d7j45,Uid:c397e3b2-3430-4557-8be7-30cc078a5e72,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444841828451331,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-d7j45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c397e3b2-3430-4557-8be7-30cc078a5e72,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:38:27.382498834Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc7ba0ff7fb98aabac081193c386719312e74f4e11cc9886e32f8ed095db0e6e,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-78b46b4d5c-2grxm,Uid:6de3bf59-000a-422f-9853-f79dc4e77323,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1705444704159582870,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-2grxm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6de3bf59-000a-422f-9853-f79dc4e77323,pod-template-hash: 78b46b4d5c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:38:22.869514065Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d0552a5c68125e034b7887d3d52ae87003fe53b49495d86ab6550e8c94b847c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:57e7e896-ac59-44b4-a9c4-5606326f8d86,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444703872481642,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e7e896-ac59-44b4-a9c4-5606326f8d86,},Annotations:
map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-16T22:38:23.212530803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:579e5fd988e1406dd3f90c48b74fa80246cd010d3d60b0de4202c485fd75a276,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-sdmtr,Uid:ec6dcf21-656d-4cc6-a676-1966a5ebb1
f5,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444703396067927,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sdmtr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: ec6dcf21-656d-4cc6-a676-1966a5ebb1f5,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:38:23.051764566Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:187f4da818b626bac253e6e9f07def2a3b67ed0dda49efb4fc1c82508939a757,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-rc48z,Uid:a381f197-a8f5-4bd1-a31a-dcb532ccfbcc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444696265911445,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-rc48z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a381f197-a8f5-4bd1-a31a-dcb532ccfbcc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:38:15.633931954Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd0619da10bfa0db2b09593fd6fa201254099b3b902d0966477dcabb96a32205,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-dw95f,Uid:5e9d7112-e3af-448f-9c2e-381418ae9772,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444695835559816,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-dw95f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9d7112-e3af-448f-9c2e-381418ae9772,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:38:15.505842317Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b79e0381d5831dbd574d3dc10199742706c563fb2a33d80fd69ea03f61aa806,Metadata:&PodSandboxMetadata{Name:kube-proxy-blz7c,
Uid:62a02cd2-8d73-4efd-a0aa-65632646bc9b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444695607661803,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-blz7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a02cd2-8d73-4efd-a0aa-65632646bc9b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T22:38:14.676584409Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af916381a2eb834c4cff8255b84d6d7bbb1b7e8a09f0321126bda379e0e97d3a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-033244,Uid:34f9c947320f44ec6a3748142bf708e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444673881118407,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f9c
947320f44ec6a3748142bf708e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.234:8443,kubernetes.io/config.hash: 34f9c947320f44ec6a3748142bf708e4,kubernetes.io/config.seen: 2024-01-16T22:37:52.735046477Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:493e71a53a5377394065adda546e4b2485f9fc1fe308913d0379e0ba53009332,Metadata:&PodSandboxMetadata{Name:etcd-addons-033244,Uid:751411d8ca0d323c6088b1adea575403,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444673875042280,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751411d8ca0d323c6088b1adea575403,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.234:2379,kubernetes.io/config.hash: 751411d8ca0d323c6088b1adea575403,kubernetes.io/config.seen: 2
024-01-16T22:37:52.735045349Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:67832f8cb202bcb3dcdd44c0e4c350a61d81015c02480f32e39ce1123ce7a4b4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-033244,Uid:66cca846aec94fa8d0877340f001c018,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705444673841477835,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cca846aec94fa8d0877340f001c018,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 66cca846aec94fa8d0877340f001c018,kubernetes.io/config.seen: 2024-01-16T22:37:52.735041583Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea4e2bced6ebe8240dd49e1ea5f5bbba073e2f59370137937ab843374f859278,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-033244,Uid:1be8bfcc62234f8b2d621a2c74061dc5,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_READY,CreatedAt:1705444673837050808,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be8bfcc62234f8b2d621a2c74061dc5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1be8bfcc62234f8b2d621a2c74061dc5,kubernetes.io/config.seen: 2024-01-16T22:37:52.735047613Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=9f14d01a-eee2-4aad-a385-032be1381aa3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.785797622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=18e62575-93bf-4cac-b75c-62d54ad1798d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.785855698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=18e62575-93bf-4cac-b75c-62d54ad1798d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.786693087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7941f8117418b93b57487f9537b01125d766c2479dff6c80da3a72ae8a7c62f7,PodSandboxId:de7bc6bb4a1da504db4539474b20ad351cbcf363e9faca4a7fbe330b968ce584,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445032178238296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bf89p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c56cf2d-2d36-4e94-9aa4-0eda074c6d30,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0cd58f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3fb2e8ea60f87512c7306e23e7e43c9b46c12840834e5b3e37bc5d04e30ff,PodSandboxId:5e571a6d2cfa3d6bb1cbbb063e56cd4be5b084ab9182490fcd050ab3a4e82e28,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705444892612910396,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 161da6d6-00f5-4bed-85ce-e0fe7e9ef47e,},Annotations:map[string]string{io.kubernet
es.container.hash: 168a957e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7cf26f5547368907409cd2dbb0fb7f3e2c88f90f53b563b510c9df825d732a5,PodSandboxId:5a1be7017f5785ea692aa89310b66a28223afca5f5887b3f8eecbbd8737723fe,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705444880705927408,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-r2dlj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 68006346-d91a-4daf-bd72-77f14555bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 4575ea97,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d02b5d6222942b70cd02536f575dba4a013544f97ff1aba60dda0f364964554,PodSandboxId:350ef5f3791aeaec5168b096ce7dbb6aa7f5eb064b94e101a647dadf678b22ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705444845624992944,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-d7j45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c397e3b2-3430-4557-8be7-30cc078a5e72,},Annotations:map[string]string{io.kubernetes.container.hash: 1a217e84,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997d1f73517a66649e458a2bd53d5439eb1ea3a34d705f162767afc001bfdb19,PodSandboxId:bc7ba0ff7fb98aabac081193c386719312e74f4e11cc9886e32f8ed095db0e6e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1705444771633874705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-2grxm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6de3bf59-000a-422f-9853-f79dc4e77323,},Annotations:map[string]string{io.kubernetes.container.hash: 7225b98b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a263c9ff9d57e7735c583161d80e22d326d5f695c9fef0c798dcc1750c3e75,PodSandboxId:579e5fd988e1406dd3f90c48b74fa80246cd010d3d60b0de4202c485fd75a276,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e1560
5311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705444717542760125,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sdmtr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: ec6dcf21-656d-4cc6-a676-1966a5ebb1f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7b53f0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac684c8d49882265ac05a14bcabb28beb9b3aa75ce7867d94bf1ecd8fded1da0,PodSandboxId:6d0552a5c68125e034b7887d3d52ae87003fe53b49495d86ab6550e8c94b847c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[s
tring]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705444715183575763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e7e896-ac59-44b4-a9c4-5606326f8d86,},Annotations:map[string]string{io.kubernetes.container.hash: ede2bd4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc2f381283f8f1512f3b2f553133a933c5bdb343ab247d7d2c7abfff18f47fa,PodSandboxId:0b79e0381d5831dbd574d3dc10199742706c563fb2a33d80fd69ea03f61aa806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]stri
ng{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705444710855544492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blz7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a02cd2-8d73-4efd-a0aa-65632646bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9752ead5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa65f48fcbc8865292e162229d883931aef1a38e19d468be115e82875ec8a2b,PodSandboxId:fd0619da10bfa0db2b09593fd6fa201254099b3b902d0966477dcabb96a32205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredn
s/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700419255559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dw95f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9d7112-e3af-448f-9c2e-381418ae9772,},Annotations:map[string]string{io.kubernetes.container.hash: 727cf5c5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6dcd5a0d72534f2f4d28c29809cb8c463fea6490ab022847de89afda557a26e,PodSandboxId:187f4da818b626bac253e6e9f07def2a3b67ed0dda49efb4fc1c82508939a757,Metadata
:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700525348283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rc48z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a381f197-a8f5-4bd1-a31a-dcb532ccfbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2abb13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:a3e64d151ec803f8393e4f80358e91d42401aa140351017c6b283098ed0d0404,PodSandboxId:67832f8cb202bcb3dcdd44c0e4c350a61d81015c02480f32e39ce1123ce7a4b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705444674784575975,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cca846aec94fa8d0877340f001c018,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:539fb6f69b419ec8ed74b39468d5fea1b243a93e24fed4d17fd62fb6501422d6,PodSandboxId:af916381a2eb834c4cff8255b84d6d7bbb1b7e8a09f0321126bda379e0e97d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705444674518846965,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f9c947320f44ec6a3748142bf708e4,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:d906df4b6a5299397fa79013ae1f154639c7c3a9d3c36ead6547848d7c1ebfa6,PodSandboxId:493e71a53a5377394065adda546e4b2485f9fc1fe308913d0379e0ba53009332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705444674435636024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751411d8ca0d323c6088b1adea575403,},Annotations:map[string]string{io.kubernetes.container.hash: dbc182e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a776c6c37d4db37443dcd1ae504caded32d63
813eb6541e5805d0a1ceca319f,PodSandboxId:ea4e2bced6ebe8240dd49e1ea5f5bbba073e2f59370137937ab843374f859278,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705444674344732443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be8bfcc62234f8b2d621a2c74061dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/c
hain.go:25" id=18e62575-93bf-4cac-b75c-62d54ad1798d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.787868482Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2ad6fdaa-2192-433f-8d4f-98e4c26f29d4 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.787909876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2ad6fdaa-2192-433f-8d4f-98e4c26f29d4 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.789312696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a5b59cb1-cbed-4629-a25e-3626c4fb05b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.790530394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705445039790512673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575394,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=a5b59cb1-cbed-4629-a25e-3626c4fb05b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.791330193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=202a2823-ac44-44ef-9356-141b2ebaf253 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.791434981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=202a2823-ac44-44ef-9356-141b2ebaf253 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:43:59 addons-033244 crio[721]: time="2024-01-16 22:43:59.791846316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7941f8117418b93b57487f9537b01125d766c2479dff6c80da3a72ae8a7c62f7,PodSandboxId:de7bc6bb4a1da504db4539474b20ad351cbcf363e9faca4a7fbe330b968ce584,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445032178238296,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bf89p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c56cf2d-2d36-4e94-9aa4-0eda074c6d30,},Annotations:map[string]string{io.kubernetes.container.hash: 5f0cd58f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3fb2e8ea60f87512c7306e23e7e43c9b46c12840834e5b3e37bc5d04e30ff,PodSandboxId:5e571a6d2cfa3d6bb1cbbb063e56cd4be5b084ab9182490fcd050ab3a4e82e28,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705444892612910396,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 161da6d6-00f5-4bed-85ce-e0fe7e9ef47e,},Annotations:map[string]string{io.kubernet
es.container.hash: 168a957e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7cf26f5547368907409cd2dbb0fb7f3e2c88f90f53b563b510c9df825d732a5,PodSandboxId:5a1be7017f5785ea692aa89310b66a28223afca5f5887b3f8eecbbd8737723fe,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705444880705927408,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-r2dlj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 68006346-d91a-4daf-bd72-77f14555bdd0,},Annotations:map[string]string{io.kubernetes.container.hash: 4575ea97,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d02b5d6222942b70cd02536f575dba4a013544f97ff1aba60dda0f364964554,PodSandboxId:350ef5f3791aeaec5168b096ce7dbb6aa7f5eb064b94e101a647dadf678b22ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705444845624992944,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-d7j45,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c397e3b2-3430-4557-8be7-30cc078a5e72,},Annotations:map[string]string{io.kubernetes.container.hash: 1a217e84,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f76f8dacf31e7593a6bef133cf8aff8a16f71b1a54d51a718afe9588ac67d,PodSandboxId:b97c5ce5032e037dd242221fe2c2ee449ab14c60735aa6ee5abbda01b216cfe9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705444771797597032,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-77b9x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 466d023b-86c9-43a4-ab40-e9eac3a10b17,},Annotations:map[string]string{io.kubernetes.container.hash: 732aab84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997d1f73517a66649e458a2bd53d5439eb1ea3a34d705f162767afc001bfdb19,PodSandboxId:bc7ba0ff7fb98aabac081193c386719312e74f4e11cc9886e32f8ed095db0e6e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@s
ha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1705444771633874705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-2grxm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6de3bf59-000a-422f-9853-f79dc4e77323,},Annotations:map[string]string{io.kubernetes.container.hash: 7225b98b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e829df6a9326ab7620482efe54e2a4b433fe2c6b2d2ebd8219c9aa61d0724075,PodSandboxId:4970affe4c886be7013f9adb15b975e634b5a111496a1e629e7aee88bb0e14ec,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotation
s:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705444748942356960,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wlnqf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c278637-fa95-4701-894a-78833f5db5aa,},Annotations:map[string]string{io.kubernetes.container.hash: 30758f34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a263c9ff9d57e7735c583161d80e22d326d5f695c9fef0c798dcc1750c3e75,PodSandboxId:579e5fd988e1406dd3f90c48b74fa80246cd010d3d60b0de4202c485fd75a276,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63
e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705444717542760125,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-sdmtr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: ec6dcf21-656d-4cc6-a676-1966a5ebb1f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7b53f0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac684c8d49882265ac05a14bcabb28beb9b3aa75ce7867d94bf1ecd8fded1da0,PodSandboxId:6d0552a5c68125e034b7887d3d52ae87003fe53b49495d86ab6550e8c94b847c,Metadata:&ContainerMetadata{Name:storage-provisioner,
Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705444715183575763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e7e896-ac59-44b4-a9c4-5606326f8d86,},Annotations:map[string]string{io.kubernetes.container.hash: ede2bd4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc2f381283f8f1512f3b2f553133a933c5bdb343ab247d7d2c7abfff18f47fa,PodSandboxId:0b79e0381d5831dbd574d3dc10199742706c563fb2a33d80fd69ea03f61aa806,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705444710855544492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blz7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a02cd2-8d73-4efd-a0aa-65632646bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9752ead5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa65f48fcbc8865292e162229d883931aef1a38e19d468be115e82875ec8a2b,PodSandboxId:fd0619da10bfa0db2b09593fd6fa201254099b3b902d0966477dcabb96a32205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89
fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700419255559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dw95f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9d7112-e3af-448f-9c2e-381418ae9772,},Annotations:map[string]string{io.kubernetes.container.hash: 727cf5c5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6dcd5a0d72534f2f4d28c29809cb8c463
fea6490ab022847de89afda557a26e,PodSandboxId:187f4da818b626bac253e6e9f07def2a3b67ed0dda49efb4fc1c82508939a757,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705444700525348283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rc48z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a381f197-a8f5-4bd1-a31a-dcb532ccfbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2abb13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3e64d151ec803f8393e4f80358e91d42401aa140351017c6b283098ed0d0404,PodSandboxId:67832f8cb202bcb3dcdd44c0e4c350a61d81015c02480f32e39ce1123ce7a4b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705444674784575975,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cca846aec94fa8d0877340f001c018,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539fb6f69b419ec8ed74b39468d5fea1b243a93e24fed4d17fd62fb6501422d6,PodSandboxId:af916381a2eb834c4cff8255b84d6d7bbb1b7e8a09f0321126bda379e0e97d3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705444674518846965,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34f9c947320f44ec6a3748142bf708e4,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d906df4b6a5299397fa79013ae1f154639c7c3a9d3c36ead6547848d7c1ebfa6,PodSandboxId:493e71a53a5377394065adda546e4b2485f9fc1fe308913d0379e0ba53009332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705444674435636024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751411d8ca0d323c6088b1adea575403,},Annotations:map[string]string{io.kubernetes.container.hash: dbc182e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a776c6c37d4db37443dcd1ae504caded32d63813eb6541e5805d0a1ceca319f,PodSandboxId:ea4e2bced6ebe8240dd49e1ea5f5bbba073e2f59370137937ab843374f859278,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705444674344732443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-033244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be8bfcc62234f8b2d621a2c74061dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=202a2823-ac44-44ef-9356-141b2ebaf253 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7941f8117418b       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   de7bc6bb4a1da       hello-world-app-5d77478584-bf89p
	1cf3fb2e8ea60       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   5e571a6d2cfa3       nginx
	a7cf26f554736       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   5a1be7017f578       headlamp-7ddfbb94ff-r2dlj
	6d02b5d622294       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   350ef5f3791ae       gcp-auth-d4c87556c-d7j45
	f77f76f8dacf3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              patch                     0                   b97c5ce5032e0       ingress-nginx-admission-patch-77b9x
	997d1f73517a6       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   bc7ba0ff7fb98       local-path-provisioner-78b46b4d5c-2grxm
	e829df6a9326a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   4970affe4c886       ingress-nginx-admission-create-wlnqf
	53a263c9ff9d5       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   579e5fd988e14       yakd-dashboard-9947fc6bf-sdmtr
	ac684c8d49882       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   6d0552a5c6812       storage-provisioner
	9cc2f381283f8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   0b79e0381d583       kube-proxy-blz7c
	f6dcd5a0d7253       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   187f4da818b62       coredns-5dd5756b68-rc48z
	4aa65f48fcbc8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   fd0619da10bfa       coredns-5dd5756b68-dw95f
	a3e64d151ec80       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             6 minutes ago       Running             kube-scheduler            0                   67832f8cb202b       kube-scheduler-addons-033244
	539fb6f69b419       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             6 minutes ago       Running             kube-apiserver            0                   af916381a2eb8       kube-apiserver-addons-033244
	d906df4b6a529       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             6 minutes ago       Running             etcd                      0                   493e71a53a537       etcd-addons-033244
	9a776c6c37d4d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             6 minutes ago       Running             kube-controller-manager   0                   ea4e2bced6ebe       kube-controller-manager-addons-033244
	
	
	==> coredns [4aa65f48fcbc8865292e162229d883931aef1a38e19d468be115e82875ec8a2b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:49835 - 45683 "HINFO IN 5027332554567206256.4696347328785384461. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009884454s
	[INFO] 10.244.0.8:47434 - 53262 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000222397s
	[INFO] 10.244.0.8:47434 - 52748 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073795s
	[INFO] 10.244.0.8:43922 - 54374 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.002018073s
	[INFO] 10.244.0.8:43922 - 49508 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147885s
	[INFO] 10.244.0.22:47090 - 59827 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000377049s
	[INFO] 10.244.0.22:45749 - 52063 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139491s
	[INFO] 10.244.0.22:37945 - 4458 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109197s
	[INFO] 10.244.0.22:42126 - 12497 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124002s
	[INFO] 10.244.0.22:35345 - 16619 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000925539s
	[INFO] 10.244.0.25:54849 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000411797s
	
	
	==> coredns [f6dcd5a0d72534f2f4d28c29809cb8c463fea6490ab022847de89afda557a26e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59751 - 2098 "HINFO IN 3660925721792213534.3311471453217219066. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010527763s
	[INFO] 10.244.0.8:54030 - 16584 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000528933s
	[INFO] 10.244.0.8:54030 - 25804 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087221s
	[INFO] 10.244.0.8:34150 - 44951 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074444s
	[INFO] 10.244.0.8:34150 - 54921 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074269s
	[INFO] 10.244.0.8:44757 - 10887 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117857s
	[INFO] 10.244.0.8:44757 - 62342 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088891s
	[INFO] 10.244.0.8:43336 - 28811 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152659s
	[INFO] 10.244.0.8:43336 - 50566 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090089s
	[INFO] 10.244.0.8:50358 - 27554 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000945s
	[INFO] 10.244.0.8:50358 - 59809 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082969s
	[INFO] 10.244.0.8:47621 - 22066 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050387s
	[INFO] 10.244.0.8:47621 - 3380 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000032799s
	[INFO] 10.244.0.22:47938 - 61469 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000437586s
	[INFO] 10.244.0.22:48439 - 53562 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096337s
	[INFO] 10.244.0.22:60677 - 34801 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000621629s
	[INFO] 10.244.0.25:35791 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000244179s
	
	
	==> describe nodes <==
	Name:               addons-033244
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-033244
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=addons-033244
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T22_38_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-033244
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 22:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-033244
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 22:43:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 22:42:07 +0000   Tue, 16 Jan 2024 22:37:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 22:42:07 +0000   Tue, 16 Jan 2024 22:37:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 22:42:07 +0000   Tue, 16 Jan 2024 22:37:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 22:42:07 +0000   Tue, 16 Jan 2024 22:38:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    addons-033244
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb3cfe131d1a49f0aa2a7277c5cb6485
	  System UUID:                bb3cfe13-1d1a-49f0-aa2a-7277c5cb6485
	  Boot ID:                    0325d15e-5e80-4286-8917-8bc267b756c5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-bf89p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-d7j45                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  headlamp                    headlamp-7ddfbb94ff-r2dlj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-5dd5756b68-dw95f                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m45s
	  kube-system                 coredns-5dd5756b68-rc48z                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m45s
	  kube-system                 etcd-addons-033244                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m58s
	  kube-system                 kube-apiserver-addons-033244               250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-controller-manager-addons-033244      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-blz7c                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-scheduler-addons-033244               100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  local-path-storage          local-path-provisioner-78b46b4d5c-2grxm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-sdmtr             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             368Mi (9%)  596Mi (15%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m26s                kube-proxy       
	  Normal  Starting                 6m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m7s (x8 over 6m8s)  kubelet          Node addons-033244 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s (x8 over 6m8s)  kubelet          Node addons-033244 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s (x7 over 6m8s)  kubelet          Node addons-033244 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s                kubelet          Node addons-033244 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s                kubelet          Node addons-033244 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s                kubelet          Node addons-033244 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m58s                kubelet          Node addons-033244 status is now: NodeReady
	  Normal  RegisteredNode           5m46s                node-controller  Node addons-033244 event: Registered Node addons-033244 in Controller
	
	
	==> dmesg <==
	[  +0.117781] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.945711] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.906885] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.107479] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.128227] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.110863] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.208652] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[  +8.937620] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[Jan16 22:38] systemd-fstab-generator[1247]: Ignoring "noauto" for root device
	[ +26.398026] kauditd_printk_skb: 69 callbacks suppressed
	[ +25.534142] kauditd_printk_skb: 16 callbacks suppressed
	[Jan16 22:39] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.068220] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.835094] kauditd_printk_skb: 26 callbacks suppressed
	[Jan16 22:40] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.815208] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.007529] kauditd_printk_skb: 3 callbacks suppressed
	[Jan16 22:41] kauditd_printk_skb: 17 callbacks suppressed
	[ +15.618138] kauditd_printk_skb: 31 callbacks suppressed
	[ +10.898537] kauditd_printk_skb: 12 callbacks suppressed
	[ +18.323256] kauditd_printk_skb: 12 callbacks suppressed
	[Jan16 22:43] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [d906df4b6a5299397fa79013ae1f154639c7c3a9d3c36ead6547848d7c1ebfa6] <==
	{"level":"warn","ts":"2024-01-16T22:39:11.094944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.207088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:86407"}
	{"level":"info","ts":"2024-01-16T22:39:11.09508Z","caller":"traceutil/trace.go:171","msg":"trace[1623863282] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:986; }","duration":"141.359132ms","start":"2024-01-16T22:39:10.953709Z","end":"2024-01-16T22:39:11.095068Z","steps":["trace[1623863282] 'agreement among raft nodes before linearized reading'  (duration: 141.026276ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T22:39:36.033435Z","caller":"traceutil/trace.go:171","msg":"trace[40216432] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1124; }","duration":"475.40614ms","start":"2024-01-16T22:39:35.558016Z","end":"2024-01-16T22:39:36.033422Z","steps":["trace[40216432] 'read index received'  (duration: 475.341499ms)","trace[40216432] 'applied index is now lower than readState.Index'  (duration: 63.766µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T22:39:36.033613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"427.78949ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2024-01-16T22:39:36.033669Z","caller":"traceutil/trace.go:171","msg":"trace[1741739614] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1087; }","duration":"427.86441ms","start":"2024-01-16T22:39:35.605797Z","end":"2024-01-16T22:39:36.033662Z","steps":["trace[1741739614] 'agreement among raft nodes before linearized reading'  (duration: 427.751052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T22:39:36.033714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T22:39:35.605783Z","time spent":"427.922846ms","remote":"127.0.0.1:38166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13887,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-01-16T22:39:36.03381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"475.80627ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-16T22:39:36.033864Z","caller":"traceutil/trace.go:171","msg":"trace[939657312] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1087; }","duration":"475.868625ms","start":"2024-01-16T22:39:35.557986Z","end":"2024-01-16T22:39:36.033855Z","steps":["trace[939657312] 'agreement among raft nodes before linearized reading'  (duration: 475.496709ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T22:39:36.033904Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T22:39:35.55797Z","time spent":"475.927962ms","remote":"127.0.0.1:38166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10598,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-01-16T22:39:36.03428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"424.680657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:86842"}
	{"level":"info","ts":"2024-01-16T22:39:36.034341Z","caller":"traceutil/trace.go:171","msg":"trace[287718364] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1087; }","duration":"424.746471ms","start":"2024-01-16T22:39:35.609589Z","end":"2024-01-16T22:39:36.034335Z","steps":["trace[287718364] 'agreement among raft nodes before linearized reading'  (duration: 424.521916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T22:39:36.034379Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T22:39:35.609578Z","time spent":"424.79539ms","remote":"127.0.0.1:38166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":19,"response size":86865,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-01-16T22:39:36.034504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.491356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-16T22:39:36.034543Z","caller":"traceutil/trace.go:171","msg":"trace[953459536] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1087; }","duration":"180.528471ms","start":"2024-01-16T22:39:35.854007Z","end":"2024-01-16T22:39:36.034536Z","steps":["trace[953459536] 'agreement among raft nodes before linearized reading'  (duration: 180.471313ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T22:39:49.436475Z","caller":"traceutil/trace.go:171","msg":"trace[247807891] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"137.797054ms","start":"2024-01-16T22:39:49.298654Z","end":"2024-01-16T22:39:49.436451Z","steps":["trace[247807891] 'process raft request'  (duration: 137.613749ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T22:39:54.972703Z","caller":"traceutil/trace.go:171","msg":"trace[760320984] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"359.39513ms","start":"2024-01-16T22:39:54.613293Z","end":"2024-01-16T22:39:54.972688Z","steps":["trace[760320984] 'process raft request'  (duration: 359.08614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T22:39:54.972993Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T22:39:54.613274Z","time spent":"359.54801ms","remote":"127.0.0.1:38186","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-033244\" mod_revision:1147 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-033244\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-033244\" > >"}
	{"level":"warn","ts":"2024-01-16T22:40:56.799041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.638238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-16T22:40:56.79921Z","caller":"traceutil/trace.go:171","msg":"trace[1215886885] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1371; }","duration":"158.737728ms","start":"2024-01-16T22:40:56.640374Z","end":"2024-01-16T22:40:56.799112Z","steps":["trace[1215886885] 'range keys from in-memory index tree'  (duration: 158.544847ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T22:41:10.883821Z","caller":"traceutil/trace.go:171","msg":"trace[1169885810] transaction","detail":"{read_only:false; response_revision:1454; number_of_response:1; }","duration":"218.947035ms","start":"2024-01-16T22:41:10.664847Z","end":"2024-01-16T22:41:10.883794Z","steps":["trace[1169885810] 'process raft request'  (duration: 218.712449ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T22:41:20.597905Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T22:41:20.169641Z","time spent":"428.260062ms","remote":"127.0.0.1:38130","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-01-16T22:41:20.598488Z","caller":"traceutil/trace.go:171","msg":"trace[74944467] linearizableReadLoop","detail":"{readStateIndex:1657; appliedIndex:1657; }","duration":"365.932808ms","start":"2024-01-16T22:41:20.232539Z","end":"2024-01-16T22:41:20.598472Z","steps":["trace[74944467] 'read index received'  (duration: 365.927232ms)","trace[74944467] 'applied index is now lower than readState.Index'  (duration: 4.212µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T22:41:20.598764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"366.198767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2024-01-16T22:41:20.598826Z","caller":"traceutil/trace.go:171","msg":"trace[449230758] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1586; }","duration":"366.299438ms","start":"2024-01-16T22:41:20.232517Z","end":"2024-01-16T22:41:20.598817Z","steps":["trace[449230758] 'agreement among raft nodes before linearized reading'  (duration: 366.111197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T22:41:20.598891Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T22:41:20.232505Z","time spent":"366.37859ms","remote":"127.0.0.1:38166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3776,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	
	
	==> gcp-auth [6d02b5d6222942b70cd02536f575dba4a013544f97ff1aba60dda0f364964554] <==
	2024/01/16 22:40:45 GCP Auth Webhook started!
	2024/01/16 22:40:52 Ready to marshal response ...
	2024/01/16 22:40:52 Ready to write response ...
	2024/01/16 22:40:57 Ready to marshal response ...
	2024/01/16 22:40:57 Ready to write response ...
	2024/01/16 22:40:58 Ready to marshal response ...
	2024/01/16 22:40:58 Ready to write response ...
	2024/01/16 22:40:59 Ready to marshal response ...
	2024/01/16 22:40:59 Ready to write response ...
	2024/01/16 22:40:59 Ready to marshal response ...
	2024/01/16 22:40:59 Ready to write response ...
	2024/01/16 22:41:13 Ready to marshal response ...
	2024/01/16 22:41:13 Ready to write response ...
	2024/01/16 22:41:13 Ready to marshal response ...
	2024/01/16 22:41:13 Ready to write response ...
	2024/01/16 22:41:13 Ready to marshal response ...
	2024/01/16 22:41:13 Ready to write response ...
	2024/01/16 22:41:26 Ready to marshal response ...
	2024/01/16 22:41:26 Ready to write response ...
	2024/01/16 22:41:27 Ready to marshal response ...
	2024/01/16 22:41:27 Ready to write response ...
	2024/01/16 22:41:30 Ready to marshal response ...
	2024/01/16 22:41:30 Ready to write response ...
	2024/01/16 22:43:48 Ready to marshal response ...
	2024/01/16 22:43:48 Ready to write response ...
	
	
	==> kernel <==
	 22:44:00 up 6 min,  0 users,  load average: 0.35, 1.33, 0.80
	Linux addons-033244 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [539fb6f69b419ec8ed74b39468d5fea1b243a93e24fed4d17fd62fb6501422d6] <==
	I0116 22:41:18.728471       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0116 22:41:18.734825       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0116 22:41:19.768313       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0116 22:41:26.609275       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0116 22:41:26.797249       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.204.116"}
	I0116 22:41:50.003009       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.003075       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 22:41:50.018905       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.019008       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 22:41:50.035594       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.035660       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 22:41:50.057560       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.058338       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 22:41:50.059779       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.059844       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 22:41:50.081073       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.081187       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 22:41:50.086629       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.086708       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 22:41:50.098495       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 22:41:50.098570       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0116 22:41:51.058028       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0116 22:41:51.099033       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0116 22:41:51.110355       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0116 22:43:49.081866       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.176.47"}
	
	
	==> kube-controller-manager [9a776c6c37d4db37443dcd1ae504caded32d63813eb6541e5805d0a1ceca319f] <==
	W0116 22:42:34.842685       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 22:42:34.842789       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 22:42:40.891992       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 22:42:40.892097       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 22:43:03.161041       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 22:43:03.161337       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 22:43:13.677941       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 22:43:13.678045       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 22:43:16.605651       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 22:43:16.605702       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 22:43:21.904789       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 22:43:21.904903       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 22:43:48.834502       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0116 22:43:48.891482       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-bf89p"
	I0116 22:43:48.910411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.502642ms"
	I0116 22:43:48.924719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.204706ms"
	I0116 22:43:48.925027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.104µs"
	I0116 22:43:48.931856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.794µs"
	I0116 22:43:51.880936       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0116 22:43:51.891248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="7.323µs"
	I0116 22:43:51.916701       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0116 22:43:52.775934       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 22:43:52.776091       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 22:43:52.935999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.946296ms"
	I0116 22:43:52.936097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.93µs"
	
	
	==> kube-proxy [9cc2f381283f8f1512f3b2f553133a933c5bdb343ab247d7d2c7abfff18f47fa] <==
	I0116 22:38:32.653588       1 server_others.go:69] "Using iptables proxy"
	I0116 22:38:32.801636       1 node.go:141] Successfully retrieved node IP: 192.168.39.234
	I0116 22:38:33.360304       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 22:38:33.360342       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 22:38:33.376730       1 server_others.go:152] "Using iptables Proxier"
	I0116 22:38:33.376875       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 22:38:33.377041       1 server.go:846] "Version info" version="v1.28.4"
	I0116 22:38:33.377070       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 22:38:33.392632       1 config.go:315] "Starting node config controller"
	I0116 22:38:33.392695       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 22:38:33.392944       1 config.go:188] "Starting service config controller"
	I0116 22:38:33.393066       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 22:38:33.411819       1 config.go:97] "Starting endpoint slice config controller"
	I0116 22:38:33.411853       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 22:38:33.411950       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 22:38:33.496764       1 shared_informer.go:318] Caches are synced for node config
	I0116 22:38:33.509329       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [a3e64d151ec803f8393e4f80358e91d42401aa140351017c6b283098ed0d0404] <==
	W0116 22:37:59.266280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 22:37:59.266307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 22:37:59.269935       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 22:37:59.270104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 22:37:59.282619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 22:37:59.282699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 22:37:59.294724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 22:37:59.294975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 22:37:59.317482       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 22:37:59.317531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 22:37:59.363472       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 22:37:59.363495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 22:37:59.449751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 22:37:59.449885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 22:37:59.465408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 22:37:59.465458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 22:37:59.482592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 22:37:59.482695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 22:37:59.515056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 22:37:59.515469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 22:37:59.533225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 22:37:59.533270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 22:37:59.661646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 22:37:59.661693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0116 22:38:00.015537       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 22:37:30 UTC, ends at Tue 2024-01-16 22:44:00 UTC. --
	Jan 16 22:43:48 addons-033244 kubelet[1254]: I0116 22:43:48.899715    1254 memory_manager.go:346] "RemoveStaleState removing state" podUID="a047b2c2-4066-45a4-94f5-ed447633b65a" containerName="hostpath"
	Jan 16 22:43:48 addons-033244 kubelet[1254]: I0116 22:43:48.899721    1254 memory_manager.go:346] "RemoveStaleState removing state" podUID="a047b2c2-4066-45a4-94f5-ed447633b65a" containerName="csi-provisioner"
	Jan 16 22:43:48 addons-033244 kubelet[1254]: I0116 22:43:48.976054    1254 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8dlq\" (UniqueName: \"kubernetes.io/projected/4c56cf2d-2d36-4e94-9aa4-0eda074c6d30-kube-api-access-n8dlq\") pod \"hello-world-app-5d77478584-bf89p\" (UID: \"4c56cf2d-2d36-4e94-9aa4-0eda074c6d30\") " pod="default/hello-world-app-5d77478584-bf89p"
	Jan 16 22:43:48 addons-033244 kubelet[1254]: I0116 22:43:48.976116    1254 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4c56cf2d-2d36-4e94-9aa4-0eda074c6d30-gcp-creds\") pod \"hello-world-app-5d77478584-bf89p\" (UID: \"4c56cf2d-2d36-4e94-9aa4-0eda074c6d30\") " pod="default/hello-world-app-5d77478584-bf89p"
	Jan 16 22:43:50 addons-033244 kubelet[1254]: I0116 22:43:50.185512    1254 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvwqz\" (UniqueName: \"kubernetes.io/projected/6be1d5f8-53b6-49da-8a81-309c4abeab7b-kube-api-access-cvwqz\") pod \"6be1d5f8-53b6-49da-8a81-309c4abeab7b\" (UID: \"6be1d5f8-53b6-49da-8a81-309c4abeab7b\") "
	Jan 16 22:43:50 addons-033244 kubelet[1254]: I0116 22:43:50.190817    1254 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be1d5f8-53b6-49da-8a81-309c4abeab7b-kube-api-access-cvwqz" (OuterVolumeSpecName: "kube-api-access-cvwqz") pod "6be1d5f8-53b6-49da-8a81-309c4abeab7b" (UID: "6be1d5f8-53b6-49da-8a81-309c4abeab7b"). InnerVolumeSpecName "kube-api-access-cvwqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 22:43:50 addons-033244 kubelet[1254]: I0116 22:43:50.285829    1254 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cvwqz\" (UniqueName: \"kubernetes.io/projected/6be1d5f8-53b6-49da-8a81-309c4abeab7b-kube-api-access-cvwqz\") on node \"addons-033244\" DevicePath \"\""
	Jan 16 22:43:50 addons-033244 kubelet[1254]: I0116 22:43:50.886888    1254 scope.go:117] "RemoveContainer" containerID="9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92"
	Jan 16 22:43:50 addons-033244 kubelet[1254]: I0116 22:43:50.931328    1254 scope.go:117] "RemoveContainer" containerID="9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92"
	Jan 16 22:43:50 addons-033244 kubelet[1254]: E0116 22:43:50.932048    1254 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92\": container with ID starting with 9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92 not found: ID does not exist" containerID="9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92"
	Jan 16 22:43:50 addons-033244 kubelet[1254]: I0116 22:43:50.932095    1254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92"} err="failed to get container status \"9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92\": rpc error: code = NotFound desc = could not find container \"9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92\": container with ID starting with 9f928fc3c7302ace5132df3990a5e23477e80a9d6e38936753949777e5727f92 not found: ID does not exist"
	Jan 16 22:43:51 addons-033244 kubelet[1254]: I0116 22:43:51.787699    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6be1d5f8-53b6-49da-8a81-309c4abeab7b" path="/var/lib/kubelet/pods/6be1d5f8-53b6-49da-8a81-309c4abeab7b/volumes"
	Jan 16 22:43:53 addons-033244 kubelet[1254]: I0116 22:43:53.786529    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3c278637-fa95-4701-894a-78833f5db5aa" path="/var/lib/kubelet/pods/3c278637-fa95-4701-894a-78833f5db5aa/volumes"
	Jan 16 22:43:53 addons-033244 kubelet[1254]: I0116 22:43:53.786918    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="466d023b-86c9-43a4-ab40-e9eac3a10b17" path="/var/lib/kubelet/pods/466d023b-86c9-43a4-ab40-e9eac3a10b17/volumes"
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.225872    1254 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a78528d4-485a-4004-881f-691935f77dd0-webhook-cert\") pod \"a78528d4-485a-4004-881f-691935f77dd0\" (UID: \"a78528d4-485a-4004-881f-691935f77dd0\") "
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.225957    1254 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgskq\" (UniqueName: \"kubernetes.io/projected/a78528d4-485a-4004-881f-691935f77dd0-kube-api-access-mgskq\") pod \"a78528d4-485a-4004-881f-691935f77dd0\" (UID: \"a78528d4-485a-4004-881f-691935f77dd0\") "
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.229495    1254 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78528d4-485a-4004-881f-691935f77dd0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a78528d4-485a-4004-881f-691935f77dd0" (UID: "a78528d4-485a-4004-881f-691935f77dd0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.230972    1254 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78528d4-485a-4004-881f-691935f77dd0-kube-api-access-mgskq" (OuterVolumeSpecName: "kube-api-access-mgskq") pod "a78528d4-485a-4004-881f-691935f77dd0" (UID: "a78528d4-485a-4004-881f-691935f77dd0"). InnerVolumeSpecName "kube-api-access-mgskq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.326788    1254 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mgskq\" (UniqueName: \"kubernetes.io/projected/a78528d4-485a-4004-881f-691935f77dd0-kube-api-access-mgskq\") on node \"addons-033244\" DevicePath \"\""
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.326850    1254 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a78528d4-485a-4004-881f-691935f77dd0-webhook-cert\") on node \"addons-033244\" DevicePath \"\""
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.788499    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a78528d4-485a-4004-881f-691935f77dd0" path="/var/lib/kubelet/pods/a78528d4-485a-4004-881f-691935f77dd0/volumes"
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.924290    1254 scope.go:117] "RemoveContainer" containerID="17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9"
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.941677    1254 scope.go:117] "RemoveContainer" containerID="17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9"
	Jan 16 22:43:55 addons-033244 kubelet[1254]: E0116 22:43:55.942197    1254 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9\": container with ID starting with 17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9 not found: ID does not exist" containerID="17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9"
	Jan 16 22:43:55 addons-033244 kubelet[1254]: I0116 22:43:55.942253    1254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9"} err="failed to get container status \"17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9\": rpc error: code = NotFound desc = could not find container \"17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9\": container with ID starting with 17b22d9234cff03f12d2f81416c68a65eed5eb73674e2ad0b851234f1a919ae9 not found: ID does not exist"
	
	
	==> storage-provisioner [ac684c8d49882265ac05a14bcabb28beb9b3aa75ce7867d94bf1ecd8fded1da0] <==
	I0116 22:38:36.272643       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 22:38:36.434519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 22:38:36.440956       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 22:38:36.628337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 22:38:36.631252       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-033244_80aa27fa-60d0-4608-a5a9-54ab44adfbb9!
	I0116 22:38:36.678886       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4886c8f8-5abc-4903-adae-1a7225e74e57", APIVersion:"v1", ResourceVersion:"878", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-033244_80aa27fa-60d0-4608-a5a9-54ab44adfbb9 became leader
	I0116 22:38:36.836331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-033244_80aa27fa-60d0-4608-a5a9-54ab44adfbb9!
	E0116 22:41:42.383493       1 controller.go:1050] claim "46f39136-4089-49d8-afe9-d29e8ee0f3c5" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-033244 -n addons-033244
helpers_test.go:261: (dbg) Run:  kubectl --context addons-033244 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.72s)
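For reference, the check that timed out can be replayed by hand; the sketch below simply repeats the commands recorded above for profile addons-033244 (manifests from the repo's testdata directory), with an added curl --max-time so a hang fails quickly instead of holding the ssh session open. The label selector run=nginx mirrors the "pods matching run=nginx" wait in the log, and exit status 28 from the remote process matches curl's operation-timeout code; this is a manual reproduction sketch, not part of the test itself.

	# recreate the nginx pod, service and ingress the test deployed
	kubectl --context addons-033244 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-033244 replace --force -f testdata/nginx-pod-svc.yaml
	# wait for the pod the test selects on (run=nginx)
	kubectl --context addons-033244 wait --for=condition=ready pod -l run=nginx --timeout=90s
	# the request that returned exit status 28; --max-time is an addition, not part of the test
	out/minikube-linux-amd64 -p addons-033244 ssh \
	  "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"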

                                                
                                    
TestAddons/StoppedEnableDisable (155.6s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-033244
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-033244: exit status 82 (2m1.729988294s)

                                                
                                                
-- stdout --
	* Stopping node "addons-033244"  ...
	* Stopping node "addons-033244"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-033244" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-033244
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-033244: exit status 11 (21.57873892s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-033244" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-033244
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-033244: exit status 11 (6.142788677s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-033244" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-033244
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-033244: exit status 11 (6.144792958s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-033244" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.60s)
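The stop itself is what failed here: exit status 82 with GUEST_STOP_TIMEOUT, after which the VM's ssh endpoint (192.168.39.234:22) was unreachable, so every following addons command failed its "check paused" step with exit status 11. A minimal sketch of the same sequence, including the log collection the error output itself suggests, assuming the addons-033244 profile still exists:

	out/minikube-linux-amd64 stop -p addons-033244                     # timed out above with exit status 82
	out/minikube-linux-amd64 status -p addons-033244                   # check whether the VM is still "Running"
	out/minikube-linux-amd64 logs -p addons-033244 --file=logs.txt     # collect logs.txt, per the hint in the error box
	out/minikube-linux-amd64 addons enable dashboard -p addons-033244  # fails while ssh to the VM is unreachable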

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (181s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-264702 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-264702 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.011356797s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-264702 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-264702 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 13.003604083s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-264702 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0116 22:55:47.135964   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:56:00.968258   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:00.973539   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:00.983782   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:01.004125   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:01.044410   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:01.124747   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:01.285251   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:01.606026   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:02.246965   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:03.527459   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:06.088369   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:56:11.209422   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-264702 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.273552043s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-264702 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-264702 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.47
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-264702 addons disable ingress-dns --alsologtostderr -v=1
E0116 22:56:14.819103   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:56:21.449890   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-264702 addons disable ingress-dns --alsologtostderr -v=1: (9.388804143s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-264702 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-264702 addons disable ingress --alsologtostderr -v=1: (7.528864414s)
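The legacy-ingress run fails the same http check as the Ingress test above (curl through minikube ssh, exit status 28); the later ingress-dns steps did run in the log before the addon was disabled. A minimal sketch of those follow-up checks, assuming the ingress-addon-legacy-264702 profile and the 192.168.39.47 address reported for it by the test:

	# apply the ingress-dns example the test used, then resolve the test hostname against the reported IP
	kubectl --context ingress-addon-legacy-264702 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
	out/minikube-linux-amd64 -p ingress-addon-legacy-264702 ip
	nslookup hello-john.test 192.168.39.47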
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-264702 -n ingress-addon-legacy-264702
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-264702 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-264702 logs -n 25: (1.094035779s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-949292                                                 | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                             |         |         |                     |                     |
	| ssh            | functional-949292 ssh findmnt                                        | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC |                     |
	|                | -T /mount1                                                           |                             |         |         |                     |                     |
	| mount          | -p functional-949292                                                 | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                             |         |         |                     |                     |
	| ssh            | functional-949292 ssh findmnt                                        | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | -T /mount1                                                           |                             |         |         |                     |                     |
	| ssh            | functional-949292 ssh findmnt                                        | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | -T /mount2                                                           |                             |         |         |                     |                     |
	| ssh            | functional-949292 ssh findmnt                                        | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | -T /mount3                                                           |                             |         |         |                     |                     |
	| update-context | functional-949292                                                    | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | update-context                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                             |         |         |                     |                     |
	| update-context | functional-949292                                                    | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | update-context                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                             |         |         |                     |                     |
	| mount          | -p functional-949292                                                 | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC |                     |
	|                | --kill=true                                                          |                             |         |         |                     |                     |
	| update-context | functional-949292                                                    | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | update-context                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                             |         |         |                     |                     |
	| image          | functional-949292                                                    | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | image ls --format short                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                             |         |         |                     |                     |
	| image          | functional-949292                                                    | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | image ls --format yaml                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                             |         |         |                     |                     |
	| ssh            | functional-949292 ssh pgrep                                          | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC |                     |
	|                | buildkitd                                                            |                             |         |         |                     |                     |
	| image          | functional-949292                                                    | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | image ls --format json                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                             |         |         |                     |                     |
	| image          | functional-949292                                                    | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | image ls --format table                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                             |         |         |                     |                     |
	| image          | functional-949292 image build -t                                     | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	|                | localhost/my-image:functional-949292                                 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                             |         |         |                     |                     |
	| image          | functional-949292 image ls                                           | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	| delete         | -p functional-949292                                                 | functional-949292           | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:51 UTC |
	| start          | -p ingress-addon-legacy-264702                                       | ingress-addon-legacy-264702 | jenkins | v1.32.0 | 16 Jan 24 22:51 UTC | 16 Jan 24 22:53 UTC |
	|                | --kubernetes-version=v1.18.20                                        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-264702                                          | ingress-addon-legacy-264702 | jenkins | v1.32.0 | 16 Jan 24 22:53 UTC | 16 Jan 24 22:53 UTC |
	|                | addons enable ingress                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-264702                                          | ingress-addon-legacy-264702 | jenkins | v1.32.0 | 16 Jan 24 22:53 UTC | 16 Jan 24 22:53 UTC |
	|                | addons enable ingress-dns                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-264702                                          | ingress-addon-legacy-264702 | jenkins | v1.32.0 | 16 Jan 24 22:54 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-264702 ip                                       | ingress-addon-legacy-264702 | jenkins | v1.32.0 | 16 Jan 24 22:56 UTC | 16 Jan 24 22:56 UTC |
	| addons         | ingress-addon-legacy-264702                                          | ingress-addon-legacy-264702 | jenkins | v1.32.0 | 16 Jan 24 22:56 UTC | 16 Jan 24 22:56 UTC |
	|                | addons disable ingress-dns                                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-264702                                          | ingress-addon-legacy-264702 | jenkins | v1.32.0 | 16 Jan 24 22:56 UTC | 16 Jan 24 22:56 UTC |
	|                | addons disable ingress                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                             |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 22:51:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 22:51:52.017560   23993 out.go:296] Setting OutFile to fd 1 ...
	I0116 22:51:52.017683   23993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:51:52.017692   23993 out.go:309] Setting ErrFile to fd 2...
	I0116 22:51:52.017697   23993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:51:52.017907   23993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 22:51:52.018506   23993 out.go:303] Setting JSON to false
	I0116 22:51:52.019425   23993 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2058,"bootTime":1705443454,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 22:51:52.019491   23993 start.go:138] virtualization: kvm guest
	I0116 22:51:52.021560   23993 out.go:177] * [ingress-addon-legacy-264702] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 22:51:52.023382   23993 notify.go:220] Checking for updates...
	I0116 22:51:52.023385   23993 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 22:51:52.024702   23993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 22:51:52.025971   23993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:51:52.027384   23993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:51:52.028678   23993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 22:51:52.029937   23993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 22:51:52.031309   23993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 22:51:52.065727   23993 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 22:51:52.067241   23993 start.go:298] selected driver: kvm2
	I0116 22:51:52.067252   23993 start.go:902] validating driver "kvm2" against <nil>
	I0116 22:51:52.067262   23993 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 22:51:52.067964   23993 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:51:52.068033   23993 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 22:51:52.082279   23993 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 22:51:52.082327   23993 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 22:51:52.082538   23993 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 22:51:52.082600   23993 cni.go:84] Creating CNI manager for ""
	I0116 22:51:52.082612   23993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:51:52.082625   23993 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 22:51:52.082636   23993 start_flags.go:321] config:
	{Name:ingress-addon-legacy-264702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-264702 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:51:52.082772   23993 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:51:52.084733   23993 out.go:177] * Starting control plane node ingress-addon-legacy-264702 in cluster ingress-addon-legacy-264702
	I0116 22:51:52.086187   23993 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 22:51:52.520079   23993 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 22:51:52.520124   23993 cache.go:56] Caching tarball of preloaded images
	I0116 22:51:52.520263   23993 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 22:51:52.522146   23993 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 22:51:52.523638   23993 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 22:51:52.620189   23993 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 22:52:05.895472   23993 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 22:52:05.895563   23993 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 22:52:06.873018   23993 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0116 22:52:06.873347   23993 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/config.json ...
	I0116 22:52:06.873376   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/config.json: {Name:mk1edd0b62bca7787e245414303ac9eb6a446583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:06.873548   23993 start.go:365] acquiring machines lock for ingress-addon-legacy-264702: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 22:52:06.873580   23993 start.go:369] acquired machines lock for "ingress-addon-legacy-264702" in 17.049µs
	I0116 22:52:06.873598   23993 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-264702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-264702 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 22:52:06.873688   23993 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 22:52:06.875817   23993 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0116 22:52:06.875959   23993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:52:06.875983   23993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:52:06.889392   23993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0116 22:52:06.889785   23993 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:52:06.890342   23993 main.go:141] libmachine: Using API Version  1
	I0116 22:52:06.890364   23993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:52:06.890695   23993 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:52:06.890894   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetMachineName
	I0116 22:52:06.891019   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:06.891175   23993 start.go:159] libmachine.API.Create for "ingress-addon-legacy-264702" (driver="kvm2")
	I0116 22:52:06.891239   23993 client.go:168] LocalClient.Create starting
	I0116 22:52:06.891274   23993 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem
	I0116 22:52:06.891311   23993 main.go:141] libmachine: Decoding PEM data...
	I0116 22:52:06.891333   23993 main.go:141] libmachine: Parsing certificate...
	I0116 22:52:06.891397   23993 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem
	I0116 22:52:06.891425   23993 main.go:141] libmachine: Decoding PEM data...
	I0116 22:52:06.891450   23993 main.go:141] libmachine: Parsing certificate...
	I0116 22:52:06.891480   23993 main.go:141] libmachine: Running pre-create checks...
	I0116 22:52:06.891496   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .PreCreateCheck
	I0116 22:52:06.891772   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetConfigRaw
	I0116 22:52:06.892151   23993 main.go:141] libmachine: Creating machine...
	I0116 22:52:06.892168   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .Create
	I0116 22:52:06.892299   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Creating KVM machine...
	I0116 22:52:06.893550   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found existing default KVM network
	I0116 22:52:06.894199   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:06.894050   24051 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I0116 22:52:06.899338   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | trying to create private KVM network mk-ingress-addon-legacy-264702 192.168.39.0/24...
	I0116 22:52:06.968607   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | private KVM network mk-ingress-addon-legacy-264702 192.168.39.0/24 created
	I0116 22:52:06.968698   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:06.968562   24051 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:52:06.968741   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Setting up store path in /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702 ...
	I0116 22:52:06.968776   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Building disk image from file:///home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 22:52:06.968795   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Downloading /home/jenkins/minikube-integration/17975-6238/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 22:52:07.170713   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:07.170557   24051 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa...
	I0116 22:52:07.210382   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:07.210221   24051 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/ingress-addon-legacy-264702.rawdisk...
	I0116 22:52:07.210413   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Writing magic tar header
	I0116 22:52:07.210433   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Writing SSH key tar header
	I0116 22:52:07.210442   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:07.210360   24051 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702 ...
	I0116 22:52:07.210454   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702
	I0116 22:52:07.210515   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702 (perms=drwx------)
	I0116 22:52:07.210537   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube/machines (perms=drwxr-xr-x)
	I0116 22:52:07.210548   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube/machines
	I0116 22:52:07.210563   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube (perms=drwxr-xr-x)
	I0116 22:52:07.210584   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238 (perms=drwxrwxr-x)
	I0116 22:52:07.210597   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 22:52:07.210609   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 22:52:07.210620   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Creating domain...
	I0116 22:52:07.210630   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:52:07.210646   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238
	I0116 22:52:07.210660   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 22:52:07.210674   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Checking permissions on dir: /home/jenkins
	I0116 22:52:07.210687   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Checking permissions on dir: /home
	I0116 22:52:07.210704   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Skipping /home - not owner
	I0116 22:52:07.211711   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) define libvirt domain using xml: 
	I0116 22:52:07.211738   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) <domain type='kvm'>
	I0116 22:52:07.211751   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   <name>ingress-addon-legacy-264702</name>
	I0116 22:52:07.211772   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   <memory unit='MiB'>4096</memory>
	I0116 22:52:07.211786   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   <vcpu>2</vcpu>
	I0116 22:52:07.211802   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   <features>
	I0116 22:52:07.211812   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <acpi/>
	I0116 22:52:07.211820   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <apic/>
	I0116 22:52:07.211827   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <pae/>
	I0116 22:52:07.211834   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     
	I0116 22:52:07.211841   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   </features>
	I0116 22:52:07.211849   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   <cpu mode='host-passthrough'>
	I0116 22:52:07.211855   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   
	I0116 22:52:07.211868   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   </cpu>
	I0116 22:52:07.211876   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   <os>
	I0116 22:52:07.211886   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <type>hvm</type>
	I0116 22:52:07.211897   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <boot dev='cdrom'/>
	I0116 22:52:07.211906   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <boot dev='hd'/>
	I0116 22:52:07.211917   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <bootmenu enable='no'/>
	I0116 22:52:07.211925   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   </os>
	I0116 22:52:07.211936   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   <devices>
	I0116 22:52:07.211969   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <disk type='file' device='cdrom'>
	I0116 22:52:07.211997   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <source file='/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/boot2docker.iso'/>
	I0116 22:52:07.212014   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <target dev='hdc' bus='scsi'/>
	I0116 22:52:07.212026   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <readonly/>
	I0116 22:52:07.212040   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     </disk>
	I0116 22:52:07.212053   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <disk type='file' device='disk'>
	I0116 22:52:07.212076   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 22:52:07.212103   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <source file='/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/ingress-addon-legacy-264702.rawdisk'/>
	I0116 22:52:07.212114   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <target dev='hda' bus='virtio'/>
	I0116 22:52:07.212122   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     </disk>
	I0116 22:52:07.212129   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <interface type='network'>
	I0116 22:52:07.212139   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <source network='mk-ingress-addon-legacy-264702'/>
	I0116 22:52:07.212145   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <model type='virtio'/>
	I0116 22:52:07.212153   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     </interface>
	I0116 22:52:07.212159   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <interface type='network'>
	I0116 22:52:07.212165   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <source network='default'/>
	I0116 22:52:07.212178   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <model type='virtio'/>
	I0116 22:52:07.212191   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     </interface>
	I0116 22:52:07.212198   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <serial type='pty'>
	I0116 22:52:07.212204   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <target port='0'/>
	I0116 22:52:07.212213   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     </serial>
	I0116 22:52:07.212221   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <console type='pty'>
	I0116 22:52:07.212228   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <target type='serial' port='0'/>
	I0116 22:52:07.212235   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     </console>
	I0116 22:52:07.212244   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     <rng model='virtio'>
	I0116 22:52:07.212252   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)       <backend model='random'>/dev/random</backend>
	I0116 22:52:07.212263   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     </rng>
	I0116 22:52:07.212273   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     
	I0116 22:52:07.212282   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)     
	I0116 22:52:07.212288   23993 main.go:141] libmachine: (ingress-addon-legacy-264702)   </devices>
	I0116 22:52:07.212296   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) </domain>
	I0116 22:52:07.212301   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) 
	I0116 22:52:07.216430   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:d7:37:20 in network default
	I0116 22:52:07.216988   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Ensuring networks are active...
	I0116 22:52:07.217011   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:07.217635   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Ensuring network default is active
	I0116 22:52:07.218023   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Ensuring network mk-ingress-addon-legacy-264702 is active
	I0116 22:52:07.218510   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Getting domain xml...
	I0116 22:52:07.219123   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Creating domain...
	I0116 22:52:08.398719   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Waiting to get IP...
	I0116 22:52:08.399398   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:08.399776   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:08.399798   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:08.399746   24051 retry.go:31] will retry after 251.558252ms: waiting for machine to come up
	I0116 22:52:08.653164   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:08.653618   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:08.653647   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:08.653574   24051 retry.go:31] will retry after 336.388814ms: waiting for machine to come up
	I0116 22:52:08.990977   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:08.991455   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:08.991485   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:08.991408   24051 retry.go:31] will retry after 446.951223ms: waiting for machine to come up
	I0116 22:52:09.440012   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:09.440422   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:09.440446   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:09.440399   24051 retry.go:31] will retry after 594.084362ms: waiting for machine to come up
	I0116 22:52:10.036237   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:10.036681   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:10.036714   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:10.036636   24051 retry.go:31] will retry after 592.875798ms: waiting for machine to come up
	I0116 22:52:10.631558   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:10.631945   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:10.631970   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:10.631889   24051 retry.go:31] will retry after 663.603487ms: waiting for machine to come up
	I0116 22:52:11.296650   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:11.297174   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:11.297210   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:11.297107   24051 retry.go:31] will retry after 759.274064ms: waiting for machine to come up
	I0116 22:52:12.057918   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:12.058319   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:12.058358   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:12.058276   24051 retry.go:31] will retry after 1.424332858s: waiting for machine to come up
	I0116 22:52:13.484896   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:13.485415   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:13.485462   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:13.485377   24051 retry.go:31] will retry after 1.240190383s: waiting for machine to come up
	I0116 22:52:14.727330   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:14.727768   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:14.727807   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:14.727700   24051 retry.go:31] will retry after 1.696696704s: waiting for machine to come up
	I0116 22:52:16.426614   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:16.426940   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:16.426963   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:16.426900   24051 retry.go:31] will retry after 2.50563915s: waiting for machine to come up
	I0116 22:52:18.933728   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:18.934222   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:18.934254   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:18.934168   24051 retry.go:31] will retry after 3.285572558s: waiting for machine to come up
	I0116 22:52:22.221403   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:22.221781   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:22.221810   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:22.221742   24051 retry.go:31] will retry after 3.145370473s: waiting for machine to come up
	I0116 22:52:25.371357   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:25.371846   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find current IP address of domain ingress-addon-legacy-264702 in network mk-ingress-addon-legacy-264702
	I0116 22:52:25.371877   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | I0116 22:52:25.371764   24051 retry.go:31] will retry after 3.958979609s: waiting for machine to come up
	I0116 22:52:29.334205   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.334637   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Found IP for machine: 192.168.39.47
	I0116 22:52:29.334660   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Reserving static IP address...
	I0116 22:52:29.334689   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has current primary IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.335107   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-264702", mac: "52:54:00:ae:89:84", ip: "192.168.39.47"} in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.404791   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Getting to WaitForSSH function...
	I0116 22:52:29.404825   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Reserved static IP address: 192.168.39.47
	I0116 22:52:29.404841   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Waiting for SSH to be available...
	I0116 22:52:29.407704   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.408094   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:29.408127   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.408251   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Using SSH client type: external
	I0116 22:52:29.408283   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa (-rw-------)
	I0116 22:52:29.408320   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 22:52:29.408341   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | About to run SSH command:
	I0116 22:52:29.408358   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | exit 0
	I0116 22:52:29.494032   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | SSH cmd err, output: <nil>: 
	I0116 22:52:29.494373   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) KVM machine creation complete!
	I0116 22:52:29.494731   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetConfigRaw
	I0116 22:52:29.495265   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:29.495451   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:29.495617   23993 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 22:52:29.495632   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetState
	I0116 22:52:29.496734   23993 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 22:52:29.496748   23993 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 22:52:29.496754   23993 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 22:52:29.496761   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:29.498893   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.499255   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:29.499295   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.499395   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:29.499543   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.499708   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.499813   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:29.499925   23993 main.go:141] libmachine: Using SSH client type: native
	I0116 22:52:29.500379   23993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0116 22:52:29.500399   23993 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 22:52:29.609325   23993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 22:52:29.609355   23993 main.go:141] libmachine: Detecting the provisioner...
	I0116 22:52:29.609370   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:29.611968   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.612293   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:29.612327   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.612450   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:29.612672   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.612862   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.612995   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:29.613179   23993 main.go:141] libmachine: Using SSH client type: native
	I0116 22:52:29.613492   23993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0116 22:52:29.613516   23993 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 22:52:29.722515   23993 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 22:52:29.722610   23993 main.go:141] libmachine: found compatible host: buildroot
	I0116 22:52:29.722627   23993 main.go:141] libmachine: Provisioning with buildroot...
	I0116 22:52:29.722639   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetMachineName
	I0116 22:52:29.722887   23993 buildroot.go:166] provisioning hostname "ingress-addon-legacy-264702"
	I0116 22:52:29.722915   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetMachineName
	I0116 22:52:29.723103   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:29.725505   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.725855   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:29.725882   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.725983   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:29.726149   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.726280   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.726409   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:29.726583   23993 main.go:141] libmachine: Using SSH client type: native
	I0116 22:52:29.726906   23993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0116 22:52:29.726925   23993 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-264702 && echo "ingress-addon-legacy-264702" | sudo tee /etc/hostname
	I0116 22:52:29.845039   23993 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-264702
	
	I0116 22:52:29.845072   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:29.847689   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.848020   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:29.848056   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.848198   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:29.848386   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.848557   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:29.848709   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:29.848835   23993 main.go:141] libmachine: Using SSH client type: native
	I0116 22:52:29.849156   23993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0116 22:52:29.849182   23993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-264702' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-264702/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-264702' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 22:52:29.966260   23993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
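The two SSH commands above first set the guest hostname and then patch /etc/hosts so that 127.0.1.1 resolves to it, with a grep guard so the edit stays idempotent. A minimal Go sketch that assembles the same shell snippet for an arbitrary hostname is shown below; the hostsUpdateCmd helper is hypothetical and only mirrors the command visible in the log, it is not minikube's provisioner code.

    package main

    import "fmt"

    // hostsUpdateCmd returns a shell snippet that maps 127.0.1.1 to the given
    // hostname: it rewrites an existing 127.0.1.1 entry if there is one, and
    // appends a new entry otherwise. Hypothetical helper mirroring the log above.
    func hostsUpdateCmd(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname, hostname, hostname)
    }

    func main() {
    	fmt.Println(hostsUpdateCmd("ingress-addon-legacy-264702"))
    }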
	I0116 22:52:29.966289   23993 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 22:52:29.966305   23993 buildroot.go:174] setting up certificates
	I0116 22:52:29.966315   23993 provision.go:83] configureAuth start
	I0116 22:52:29.966324   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetMachineName
	I0116 22:52:29.966588   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetIP
	I0116 22:52:29.969218   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.969574   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:29.969609   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.969766   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:29.971794   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.972107   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:29.972149   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:29.972274   23993 provision.go:138] copyHostCerts
	I0116 22:52:29.972300   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 22:52:29.972329   23993 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 22:52:29.972341   23993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 22:52:29.972422   23993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 22:52:29.972511   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 22:52:29.972545   23993 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 22:52:29.972555   23993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 22:52:29.972592   23993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 22:52:29.972701   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 22:52:29.972728   23993 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 22:52:29.972735   23993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 22:52:29.972771   23993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 22:52:29.972835   23993 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-264702 san=[192.168.39.47 192.168.39.47 localhost 127.0.0.1 minikube ingress-addon-legacy-264702]
	I0116 22:52:30.048716   23993 provision.go:172] copyRemoteCerts
	I0116 22:52:30.048772   23993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 22:52:30.048795   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:30.051450   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.051830   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.051856   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.052060   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:30.052252   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.052406   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:30.052539   23993 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa Username:docker}
	I0116 22:52:30.135342   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 22:52:30.135418   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 22:52:30.156035   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 22:52:30.156113   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0116 22:52:30.176569   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 22:52:30.176643   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 22:52:30.197196   23993 provision.go:86] duration metric: configureAuth took 230.86993ms
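configureAuth above copies the host CA material and generates a server certificate whose SANs cover the VM IP, loopback, and the machine names. A self-contained Go sketch of producing such a SAN-bearing certificate with crypto/x509 follows; it is self-signed for brevity (minikube signs with its CA instead), and the SAN values are simply the ones from this log.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key pair for the server certificate.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	// Same kind of SANs the log shows: the VM IP, loopback, and hostnames.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-264702"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.47"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-264702"},
    	}

    	// Self-signed here for brevity; a CA-signed cert would pass the CA
    	// certificate and key as the parent/signer arguments instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }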
	I0116 22:52:30.197223   23993 buildroot.go:189] setting minikube options for container-runtime
	I0116 22:52:30.197379   23993 config.go:182] Loaded profile config "ingress-addon-legacy-264702": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 22:52:30.197446   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:30.199904   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.200332   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.200372   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.200576   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:30.200756   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.200936   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.201076   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:30.201254   23993 main.go:141] libmachine: Using SSH client type: native
	I0116 22:52:30.201567   23993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0116 22:52:30.201583   23993 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 22:52:30.491664   23993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
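The %!s(MISSING) in the logged command above is only a printf artifact of the log capture; the echoed result shows what actually lands in /etc/sysconfig/crio.minikube. A tiny Go sketch of that file's contents, using a hypothetical crioSysconfig helper, looks like:

    package main

    import "fmt"

    // crioSysconfig returns the contents written to /etc/sysconfig/crio.minikube
    // in the step above. The helper name is illustrative, not minikube's API.
    func crioSysconfig(extraOpts string) string {
    	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s'\n", extraOpts)
    }

    func main() {
    	fmt.Print(crioSysconfig("--insecure-registry 10.96.0.0/12 "))
    }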
	I0116 22:52:30.491699   23993 main.go:141] libmachine: Checking connection to Docker...
	I0116 22:52:30.491710   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetURL
	I0116 22:52:30.492974   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Using libvirt version 6000000
	I0116 22:52:30.495132   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.495468   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.495496   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.495655   23993 main.go:141] libmachine: Docker is up and running!
	I0116 22:52:30.495667   23993 main.go:141] libmachine: Reticulating splines...
	I0116 22:52:30.495673   23993 client.go:171] LocalClient.Create took 23.604424043s
	I0116 22:52:30.495696   23993 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-264702" took 23.604521364s
	I0116 22:52:30.495715   23993 start.go:300] post-start starting for "ingress-addon-legacy-264702" (driver="kvm2")
	I0116 22:52:30.495732   23993 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 22:52:30.495761   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:30.495973   23993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 22:52:30.495993   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:30.498225   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.498573   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.498605   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.498741   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:30.498902   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.499051   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:30.499151   23993 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa Username:docker}
	I0116 22:52:30.582734   23993 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 22:52:30.586442   23993 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 22:52:30.586463   23993 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 22:52:30.586525   23993 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 22:52:30.586597   23993 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 22:52:30.586607   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /etc/ssl/certs/149302.pem
	I0116 22:52:30.586687   23993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 22:52:30.594216   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 22:52:30.614896   23993 start.go:303] post-start completed in 119.162147ms
	I0116 22:52:30.614941   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetConfigRaw
	I0116 22:52:30.615482   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetIP
	I0116 22:52:30.618034   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.618379   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.618421   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.618607   23993 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/config.json ...
	I0116 22:52:30.618794   23993 start.go:128] duration metric: createHost completed in 23.745096685s
	I0116 22:52:30.618819   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:30.620982   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.621289   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.621320   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.621446   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:30.621644   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.621798   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.621945   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:30.622084   23993 main.go:141] libmachine: Using SSH client type: native
	I0116 22:52:30.622400   23993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0116 22:52:30.622411   23993 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 22:52:30.730601   23993 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705445550.704653493
	
	I0116 22:52:30.730623   23993 fix.go:206] guest clock: 1705445550.704653493
	I0116 22:52:30.730631   23993 fix.go:219] Guest: 2024-01-16 22:52:30.704653493 +0000 UTC Remote: 2024-01-16 22:52:30.618805982 +0000 UTC m=+38.652051810 (delta=85.847511ms)
	I0116 22:52:30.730649   23993 fix.go:190] guest clock delta is within tolerance: 85.847511ms
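The clock check above compares the guest's clock (read over SSH) against the host-side timestamp of the same moment and accepts the roughly 86ms skew. A short Go sketch of that comparison, using the exact values from the log and an assumed one-second tolerance, is:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values from the log: guest clock read over SSH vs. host timestamp.
    	guest := time.Unix(1705445550, 704653493)
    	remote := time.Date(2024, 1, 16, 22, 52, 30, 618805982, time.UTC)

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}

    	// Assumed tolerance for illustration; the real check only warns on large skew.
    	const tolerance = time.Second
    	fmt.Printf("delta=%v, within %v: %v\n", delta, tolerance, delta <= tolerance)
    }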
	I0116 22:52:30.730654   23993 start.go:83] releasing machines lock for "ingress-addon-legacy-264702", held for 23.857064987s
	I0116 22:52:30.730676   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:30.730982   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetIP
	I0116 22:52:30.733447   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.733889   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.733926   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.734034   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:30.734568   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:30.734742   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:52:30.734818   23993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 22:52:30.734861   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:30.734984   23993 ssh_runner.go:195] Run: cat /version.json
	I0116 22:52:30.735026   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:52:30.737560   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.737714   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.737920   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.737946   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.738103   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:30.738111   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:30.738137   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:30.738249   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:52:30.738317   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.738405   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:52:30.738537   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:30.738542   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:52:30.738692   23993 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa Username:docker}
	I0116 22:52:30.738692   23993 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa Username:docker}
	I0116 22:52:30.856324   23993 ssh_runner.go:195] Run: systemctl --version
	I0116 22:52:30.861711   23993 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 22:52:31.014692   23993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 22:52:31.020673   23993 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 22:52:31.020743   23993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 22:52:31.033655   23993 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 22:52:31.033689   23993 start.go:475] detecting cgroup driver to use...
	I0116 22:52:31.033746   23993 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 22:52:31.045915   23993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 22:52:31.057238   23993 docker.go:217] disabling cri-docker service (if available) ...
	I0116 22:52:31.057300   23993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 22:52:31.068528   23993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 22:52:31.080078   23993 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 22:52:31.180431   23993 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 22:52:31.297973   23993 docker.go:233] disabling docker service ...
	I0116 22:52:31.298047   23993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 22:52:31.310930   23993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 22:52:31.321562   23993 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 22:52:31.432866   23993 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 22:52:31.545733   23993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 22:52:31.557808   23993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 22:52:31.573596   23993 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0116 22:52:31.573659   23993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:52:31.581832   23993 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 22:52:31.581892   23993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:52:31.589870   23993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:52:31.597777   23993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 22:52:31.605874   23993 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 22:52:31.614359   23993 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 22:52:31.621544   23993 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 22:52:31.621597   23993 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 22:52:31.633158   23993 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 22:52:31.640662   23993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 22:52:31.745570   23993 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 22:52:31.912185   23993 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 22:52:31.912261   23993 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
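After restarting cri-o, the log waits up to 60s for /var/run/crio/crio.sock to appear by stat-ing it. A rough Go sketch of such a poll-until-exists loop follows; the 500ms interval is an assumption, minikube uses its own retry helper rather than this function.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a path until it exists or the timeout expires,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is ready")
    }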
	I0116 22:52:31.917200   23993 start.go:543] Will wait 60s for crictl version
	I0116 22:52:31.917253   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:31.920794   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 22:52:31.957532   23993 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 22:52:31.957605   23993 ssh_runner.go:195] Run: crio --version
	I0116 22:52:31.999153   23993 ssh_runner.go:195] Run: crio --version
	I0116 22:52:32.039411   23993 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0116 22:52:32.041099   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetIP
	I0116 22:52:32.043684   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:32.044013   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:52:32.044032   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:52:32.044282   23993 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 22:52:32.047923   23993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 22:52:32.059709   23993 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 22:52:32.059757   23993 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 22:52:32.089709   23993 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 22:52:32.089834   23993 ssh_runner.go:195] Run: which lz4
	I0116 22:52:32.093305   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 22:52:32.093386   23993 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 22:52:32.096929   23993 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 22:52:32.096949   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0116 22:52:33.926259   23993 crio.go:444] Took 1.832890 seconds to copy over tarball
	I0116 22:52:33.926347   23993 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 22:52:36.872703   23993 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946323665s)
	I0116 22:52:36.872731   23993 crio.go:451] Took 2.946445 seconds to extract the tarball
	I0116 22:52:36.872742   23993 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 22:52:36.915006   23993 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 22:52:36.967863   23993 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 22:52:36.967889   23993 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 22:52:36.967983   23993 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 22:52:36.968045   23993 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 22:52:36.968076   23993 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0116 22:52:36.967998   23993 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0116 22:52:36.967982   23993 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 22:52:36.968119   23993 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0116 22:52:36.968133   23993 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 22:52:36.967987   23993 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 22:52:36.969337   23993 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 22:52:36.969355   23993 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0116 22:52:36.969355   23993 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 22:52:36.969343   23993 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0116 22:52:36.969341   23993 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 22:52:36.969340   23993 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0116 22:52:36.969382   23993 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 22:52:36.969406   23993 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 22:52:37.200818   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0116 22:52:37.204621   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0116 22:52:37.205911   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0116 22:52:37.219625   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0116 22:52:37.221288   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0116 22:52:37.259684   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 22:52:37.277321   23993 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0116 22:52:37.277370   23993 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 22:52:37.277422   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:37.288665   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0116 22:52:37.330798   23993 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0116 22:52:37.330847   23993 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 22:52:37.330900   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:37.356478   23993 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0116 22:52:37.356524   23993 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0116 22:52:37.356586   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:37.361483   23993 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0116 22:52:37.361529   23993 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 22:52:37.361588   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:37.375562   23993 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0116 22:52:37.375578   23993 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0116 22:52:37.375606   23993 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0116 22:52:37.375606   23993 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 22:52:37.375639   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0116 22:52:37.375650   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:37.375643   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:37.396552   23993 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0116 22:52:37.396606   23993 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0116 22:52:37.396651   23993 ssh_runner.go:195] Run: which crictl
	I0116 22:52:37.396660   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0116 22:52:37.396690   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0116 22:52:37.396652   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0116 22:52:37.396754   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 22:52:37.396774   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0116 22:52:37.477661   23993 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0116 22:52:37.512084   23993 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0116 22:52:37.512141   23993 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0116 22:52:37.525576   23993 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0116 22:52:37.525651   23993 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0116 22:52:37.525723   23993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0116 22:52:37.525802   23993 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0116 22:52:37.562035   23993 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0116 22:52:37.820405   23993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 22:52:37.963188   23993 cache_images.go:92] LoadImages completed in 995.280406ms
	W0116 22:52:37.963297   23993 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0116 22:52:37.963382   23993 ssh_runner.go:195] Run: crio config
	I0116 22:52:38.018437   23993 cni.go:84] Creating CNI manager for ""
	I0116 22:52:38.018464   23993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:52:38.018487   23993 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 22:52:38.018511   23993 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-264702 NodeName:ingress-addon-legacy-264702 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 22:52:38.018689   23993 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-264702"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 22:52:38.018791   23993 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-264702 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-264702 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
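The kubeadm config and kubelet unit above are rendered from templates inside minikube's bootstrapper. As an illustrative sketch only, a trimmed-down text/template rendering of the InitConfiguration portion with this cluster's values might look like this; the struct and template here are assumptions, not minikube's real ones.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A cut-down template covering only the InitConfiguration fields visible above.
    const initConfig = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    type params struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	CRISocket        string
    	NodeName         string
    	NodeIP           string
    }

    func main() {
    	t := template.Must(template.New("init").Parse(initConfig))
    	_ = t.Execute(os.Stdout, params{
    		AdvertiseAddress: "192.168.39.47",
    		APIServerPort:    8443,
    		CRISocket:        "/var/run/crio/crio.sock",
    		NodeName:         "ingress-addon-legacy-264702",
    		NodeIP:           "192.168.39.47",
    	})
    }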
	I0116 22:52:38.018845   23993 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0116 22:52:38.027772   23993 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 22:52:38.027880   23993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 22:52:38.035981   23993 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0116 22:52:38.050509   23993 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0116 22:52:38.064800   23993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0116 22:52:38.079254   23993 ssh_runner.go:195] Run: grep 192.168.39.47	control-plane.minikube.internal$ /etc/hosts
	I0116 22:52:38.082869   23993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 22:52:38.093634   23993 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702 for IP: 192.168.39.47
	I0116 22:52:38.093667   23993 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:38.093835   23993 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 22:52:38.093874   23993 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 22:52:38.093918   23993 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.key
	I0116 22:52:38.093930   23993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt with IP's: []
	I0116 22:52:38.209104   23993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt ...
	I0116 22:52:38.209130   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: {Name:mk68025b7cb5a11412cbb46ba98043c6353816af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:38.209273   23993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.key ...
	I0116 22:52:38.209286   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.key: {Name:mk9238557b1fe6532caff43093685dc814ed6dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:38.209364   23993 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.key.82597fa3
	I0116 22:52:38.209380   23993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.crt.82597fa3 with IP's: [192.168.39.47 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 22:52:38.398798   23993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.crt.82597fa3 ...
	I0116 22:52:38.398826   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.crt.82597fa3: {Name:mkee38166237cd792c2162c784d248784b91fd7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:38.398979   23993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.key.82597fa3 ...
	I0116 22:52:38.398993   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.key.82597fa3: {Name:mkecf6e9d0de5deae638bf8930b50f7e3cdf0491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:38.399057   23993 certs.go:337] copying /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.crt.82597fa3 -> /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.crt
	I0116 22:52:38.399117   23993 certs.go:341] copying /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.key.82597fa3 -> /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.key
	I0116 22:52:38.399170   23993 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.key
	I0116 22:52:38.399183   23993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.crt with IP's: []
	I0116 22:52:38.584803   23993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.crt ...
	I0116 22:52:38.584832   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.crt: {Name:mkdfc2c0e1b246c284e611bc7c58bc27e4e01cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:38.584979   23993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.key ...
	I0116 22:52:38.584992   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.key: {Name:mk236a0d9a767859da1fa6012e1357f2b9acd120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:52:38.585066   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 22:52:38.585083   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 22:52:38.585097   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 22:52:38.585109   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 22:52:38.585118   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 22:52:38.585130   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 22:52:38.585144   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 22:52:38.585156   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 22:52:38.585208   23993 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 22:52:38.585244   23993 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 22:52:38.585255   23993 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 22:52:38.585282   23993 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 22:52:38.585308   23993 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 22:52:38.585333   23993 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 22:52:38.585373   23993 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 22:52:38.585399   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 22:52:38.585415   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem -> /usr/share/ca-certificates/14930.pem
	I0116 22:52:38.585427   23993 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /usr/share/ca-certificates/149302.pem
	I0116 22:52:38.586039   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 22:52:38.608579   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 22:52:38.629230   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 22:52:38.650877   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 22:52:38.672143   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 22:52:38.694271   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 22:52:38.714563   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 22:52:38.735150   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 22:52:38.755556   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 22:52:38.775190   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 22:52:38.795211   23993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 22:52:38.814742   23993 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 22:52:38.829392   23993 ssh_runner.go:195] Run: openssl version
	I0116 22:52:38.834460   23993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 22:52:38.844198   23993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 22:52:38.848365   23993 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 22:52:38.848412   23993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 22:52:38.853467   23993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 22:52:38.862549   23993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 22:52:38.872063   23993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 22:52:38.876203   23993 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 22:52:38.876249   23993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 22:52:38.881382   23993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 22:52:38.891369   23993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 22:52:38.901116   23993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 22:52:38.905274   23993 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 22:52:38.905321   23993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 22:52:38.910305   23993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 22:52:38.919468   23993 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 22:52:38.922991   23993 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 22:52:38.923042   23993 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-264702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-264702 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:52:38.923106   23993 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 22:52:38.923154   23993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 22:52:38.961872   23993 cri.go:89] found id: ""
	I0116 22:52:38.961946   23993 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 22:52:38.971604   23993 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 22:52:38.980512   23993 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 22:52:38.988937   23993 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 22:52:38.988976   23993 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 22:52:39.046143   23993 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0116 22:52:39.046220   23993 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 22:52:39.181645   23993 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 22:52:39.181812   23993 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 22:52:39.181946   23993 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 22:52:39.390307   23993 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 22:52:39.390517   23993 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 22:52:39.390586   23993 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 22:52:39.519201   23993 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 22:52:39.705370   23993 out.go:204]   - Generating certificates and keys ...
	I0116 22:52:39.705504   23993 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 22:52:39.705592   23993 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 22:52:39.705660   23993 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 22:52:39.722897   23993 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 22:52:39.814110   23993 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 22:52:40.160955   23993 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 22:52:40.313493   23993 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 22:52:40.313857   23993 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-264702 localhost] and IPs [192.168.39.47 127.0.0.1 ::1]
	I0116 22:52:40.647137   23993 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 22:52:40.647300   23993 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-264702 localhost] and IPs [192.168.39.47 127.0.0.1 ::1]
	I0116 22:52:40.844222   23993 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 22:52:41.047552   23993 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 22:52:41.180797   23993 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 22:52:41.180892   23993 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 22:52:41.284302   23993 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 22:52:41.532994   23993 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 22:52:41.758726   23993 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 22:52:41.829985   23993 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 22:52:41.830978   23993 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 22:52:41.832900   23993 out.go:204]   - Booting up control plane ...
	I0116 22:52:41.832986   23993 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 22:52:41.836645   23993 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 22:52:41.837958   23993 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 22:52:41.839420   23993 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 22:52:41.842143   23993 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 22:52:51.341409   23993 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503377 seconds
	I0116 22:52:51.341555   23993 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 22:52:51.359804   23993 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 22:52:51.882595   23993 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 22:52:51.882776   23993 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-264702 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 22:52:52.404737   23993 kubeadm.go:322] [bootstrap-token] Using token: 10gg1i.cr60u11hilzm69uu
	I0116 22:52:52.406321   23993 out.go:204]   - Configuring RBAC rules ...
	I0116 22:52:52.406476   23993 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 22:52:52.415695   23993 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 22:52:52.425761   23993 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 22:52:52.429563   23993 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 22:52:52.434257   23993 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 22:52:52.439738   23993 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 22:52:52.453218   23993 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 22:52:52.846867   23993 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 22:52:52.897834   23993 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 22:52:52.899768   23993 kubeadm.go:322] 
	I0116 22:52:52.899827   23993 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 22:52:52.899832   23993 kubeadm.go:322] 
	I0116 22:52:52.899969   23993 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 22:52:52.899993   23993 kubeadm.go:322] 
	I0116 22:52:52.900025   23993 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 22:52:52.900085   23993 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 22:52:52.900129   23993 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 22:52:52.900135   23993 kubeadm.go:322] 
	I0116 22:52:52.900177   23993 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 22:52:52.900255   23993 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 22:52:52.900321   23993 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 22:52:52.900339   23993 kubeadm.go:322] 
	I0116 22:52:52.900441   23993 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 22:52:52.900555   23993 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 22:52:52.900571   23993 kubeadm.go:322] 
	I0116 22:52:52.900656   23993 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 10gg1i.cr60u11hilzm69uu \
	I0116 22:52:52.900800   23993 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 22:52:52.900830   23993 kubeadm.go:322]     --control-plane 
	I0116 22:52:52.900839   23993 kubeadm.go:322] 
	I0116 22:52:52.900924   23993 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 22:52:52.900935   23993 kubeadm.go:322] 
	I0116 22:52:52.901007   23993 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 10gg1i.cr60u11hilzm69uu \
	I0116 22:52:52.901098   23993 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 22:52:52.901256   23993 kubeadm.go:322] W0116 22:52:39.029460     961 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0116 22:52:52.901402   23993 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 22:52:52.901578   23993 kubeadm.go:322] W0116 22:52:41.822694     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 22:52:52.901692   23993 kubeadm.go:322] W0116 22:52:41.824085     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 22:52:52.901713   23993 cni.go:84] Creating CNI manager for ""
	I0116 22:52:52.901720   23993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:52:52.903605   23993 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 22:52:52.905031   23993 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 22:52:52.913818   23993 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 22:52:52.929684   23993 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 22:52:52.929806   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=ingress-addon-legacy-264702 minikube.k8s.io/updated_at=2024_01_16T22_52_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:52.929808   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:53.157178   23993 ops.go:34] apiserver oom_adj: -16
	I0116 22:52:53.157238   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:53.658154   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:54.157295   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:54.657631   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:55.157370   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:55.658012   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:56.157837   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:56.657449   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:57.157323   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:57.657525   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:58.157277   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:58.657417   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:59.158173   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:52:59.658103   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:00.157843   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:00.657857   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:01.158305   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:01.658003   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:02.157422   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:02.657434   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:03.157270   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:03.657893   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:04.157969   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:04.658113   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:05.158021   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:05.657875   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:06.157937   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:06.657943   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:07.157595   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:07.657461   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:08.157716   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:08.657952   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:09.157917   23993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 22:53:09.301316   23993 kubeadm.go:1088] duration metric: took 16.371607479s to wait for elevateKubeSystemPrivileges.
	I0116 22:53:09.301358   23993 kubeadm.go:406] StartCluster complete in 30.378321506s
	I0116 22:53:09.301382   23993 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:53:09.301464   23993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:53:09.302486   23993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 22:53:09.302741   23993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 22:53:09.302872   23993 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 22:53:09.302948   23993 config.go:182] Loaded profile config "ingress-addon-legacy-264702": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 22:53:09.302955   23993 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-264702"
	I0116 22:53:09.302976   23993 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-264702"
	I0116 22:53:09.302988   23993 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-264702"
	I0116 22:53:09.303027   23993 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-264702"
	I0116 22:53:09.303033   23993 host.go:66] Checking if "ingress-addon-legacy-264702" exists ...
	I0116 22:53:09.303406   23993 kapi.go:59] client config for ingress-addon-legacy-264702: &rest.Config{Host:"https://192.168.39.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 22:53:09.303551   23993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:53:09.303554   23993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:53:09.303676   23993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:53:09.303644   23993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:53:09.304128   23993 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 22:53:09.318303   23993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0116 22:53:09.318303   23993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I0116 22:53:09.318725   23993 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:53:09.318769   23993 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:53:09.319180   23993 main.go:141] libmachine: Using API Version  1
	I0116 22:53:09.319211   23993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:53:09.319261   23993 main.go:141] libmachine: Using API Version  1
	I0116 22:53:09.319282   23993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:53:09.319560   23993 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:53:09.319600   23993 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:53:09.319777   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetState
	I0116 22:53:09.320164   23993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:53:09.320197   23993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:53:09.322121   23993 kapi.go:59] client config for ingress-addon-legacy-264702: &rest.Config{Host:"https://192.168.39.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 22:53:09.322391   23993 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-264702"
	I0116 22:53:09.322428   23993 host.go:66] Checking if "ingress-addon-legacy-264702" exists ...
	I0116 22:53:09.322702   23993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:53:09.322730   23993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:53:09.335348   23993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34875
	I0116 22:53:09.335766   23993 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:53:09.336290   23993 main.go:141] libmachine: Using API Version  1
	I0116 22:53:09.336317   23993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:53:09.336341   23993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0116 22:53:09.336659   23993 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:53:09.336722   23993 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:53:09.336833   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetState
	I0116 22:53:09.337142   23993 main.go:141] libmachine: Using API Version  1
	I0116 22:53:09.337166   23993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:53:09.337631   23993 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:53:09.338228   23993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:53:09.338267   23993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:53:09.338351   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:53:09.340299   23993 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 22:53:09.341743   23993 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 22:53:09.341766   23993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 22:53:09.341783   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:53:09.344420   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:53:09.344914   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:53:09.344946   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:53:09.345055   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:53:09.345223   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:53:09.345384   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:53:09.345516   23993 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa Username:docker}
	I0116 22:53:09.353699   23993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0116 22:53:09.354140   23993 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:53:09.354617   23993 main.go:141] libmachine: Using API Version  1
	I0116 22:53:09.354644   23993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:53:09.354940   23993 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:53:09.355115   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetState
	I0116 22:53:09.356582   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .DriverName
	I0116 22:53:09.356832   23993 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 22:53:09.356851   23993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 22:53:09.356874   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHHostname
	I0116 22:53:09.358996   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:53:09.359397   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:89:84", ip: ""} in network mk-ingress-addon-legacy-264702: {Iface:virbr1 ExpiryTime:2024-01-16 23:52:21 +0000 UTC Type:0 Mac:52:54:00:ae:89:84 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ingress-addon-legacy-264702 Clientid:01:52:54:00:ae:89:84}
	I0116 22:53:09.359430   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | domain ingress-addon-legacy-264702 has defined IP address 192.168.39.47 and MAC address 52:54:00:ae:89:84 in network mk-ingress-addon-legacy-264702
	I0116 22:53:09.359557   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHPort
	I0116 22:53:09.359736   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHKeyPath
	I0116 22:53:09.359860   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .GetSSHUsername
	I0116 22:53:09.359975   23993 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/ingress-addon-legacy-264702/id_rsa Username:docker}
	I0116 22:53:09.482466   23993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 22:53:09.488292   23993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 22:53:09.540859   23993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 22:53:09.821272   23993 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-264702" context rescaled to 1 replicas
	I0116 22:53:09.821316   23993 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 22:53:09.823226   23993 out.go:177] * Verifying Kubernetes components...
	I0116 22:53:09.824539   23993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 22:53:10.105831   23993 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 22:53:10.122531   23993 main.go:141] libmachine: Making call to close driver server
	I0116 22:53:10.122553   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .Close
	I0116 22:53:10.122593   23993 main.go:141] libmachine: Making call to close driver server
	I0116 22:53:10.122616   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .Close
	I0116 22:53:10.122829   23993 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:53:10.122833   23993 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:53:10.122844   23993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:53:10.122847   23993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:53:10.122857   23993 main.go:141] libmachine: Making call to close driver server
	I0116 22:53:10.122865   23993 main.go:141] libmachine: Making call to close driver server
	I0116 22:53:10.122871   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .Close
	I0116 22:53:10.122877   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .Close
	I0116 22:53:10.123402   23993 kapi.go:59] client config for ingress-addon-legacy-264702: &rest.Config{Host:"https://192.168.39.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 22:53:10.123726   23993 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-264702" to be "Ready" ...
	I0116 22:53:10.123937   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Closing plugin on server side
	I0116 22:53:10.123957   23993 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:53:10.123955   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) DBG | Closing plugin on server side
	I0116 22:53:10.123967   23993 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:53:10.123972   23993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:53:10.123978   23993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:53:10.141092   23993 node_ready.go:49] node "ingress-addon-legacy-264702" has status "Ready":"True"
	I0116 22:53:10.141119   23993 node_ready.go:38] duration metric: took 17.362573ms waiting for node "ingress-addon-legacy-264702" to be "Ready" ...
	I0116 22:53:10.141128   23993 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 22:53:10.159579   23993 main.go:141] libmachine: Making call to close driver server
	I0116 22:53:10.159609   23993 main.go:141] libmachine: (ingress-addon-legacy-264702) Calling .Close
	I0116 22:53:10.159881   23993 main.go:141] libmachine: Successfully made call to close driver server
	I0116 22:53:10.159901   23993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 22:53:10.161569   23993 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 22:53:10.163473   23993 addons.go:505] enable addons completed in 860.60246ms: enabled=[storage-provisioner default-storageclass]
	I0116 22:53:10.162605   23993 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-szh5r" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:12.170579   23993 pod_ready.go:102] pod "coredns-66bff467f8-szh5r" in "kube-system" namespace has status "Ready":"False"
	I0116 22:53:13.170673   23993 pod_ready.go:92] pod "coredns-66bff467f8-szh5r" in "kube-system" namespace has status "Ready":"True"
	I0116 22:53:13.170695   23993 pod_ready.go:81] duration metric: took 3.007194698s waiting for pod "coredns-66bff467f8-szh5r" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.170703   23993 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.175878   23993 pod_ready.go:92] pod "etcd-ingress-addon-legacy-264702" in "kube-system" namespace has status "Ready":"True"
	I0116 22:53:13.175906   23993 pod_ready.go:81] duration metric: took 5.19419ms waiting for pod "etcd-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.175918   23993 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.180152   23993 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-264702" in "kube-system" namespace has status "Ready":"True"
	I0116 22:53:13.180189   23993 pod_ready.go:81] duration metric: took 4.254989ms waiting for pod "kube-apiserver-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.180207   23993 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.184199   23993 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-264702" in "kube-system" namespace has status "Ready":"True"
	I0116 22:53:13.184221   23993 pod_ready.go:81] duration metric: took 4.005735ms waiting for pod "kube-controller-manager-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.184233   23993 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c6wmq" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.188837   23993 pod_ready.go:92] pod "kube-proxy-c6wmq" in "kube-system" namespace has status "Ready":"True"
	I0116 22:53:13.188859   23993 pod_ready.go:81] duration metric: took 4.618088ms waiting for pod "kube-proxy-c6wmq" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.188870   23993 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.365268   23993 request.go:629] Waited for 176.335861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-264702
	I0116 22:53:13.564915   23993 request.go:629] Waited for 196.430723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ingress-addon-legacy-264702
	I0116 22:53:13.568076   23993 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-264702" in "kube-system" namespace has status "Ready":"True"
	I0116 22:53:13.568102   23993 pod_ready.go:81] duration metric: took 379.223969ms waiting for pod "kube-scheduler-ingress-addon-legacy-264702" in "kube-system" namespace to be "Ready" ...
	I0116 22:53:13.568118   23993 pod_ready.go:38] duration metric: took 3.426979733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 22:53:13.568136   23993 api_server.go:52] waiting for apiserver process to appear ...
	I0116 22:53:13.568205   23993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 22:53:13.580861   23993 api_server.go:72] duration metric: took 3.759505641s to wait for apiserver process to appear ...
	I0116 22:53:13.580886   23993 api_server.go:88] waiting for apiserver healthz status ...
	I0116 22:53:13.580915   23993 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I0116 22:53:13.587322   23993 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I0116 22:53:13.588345   23993 api_server.go:141] control plane version: v1.18.20
	I0116 22:53:13.588366   23993 api_server.go:131] duration metric: took 7.474354ms to wait for apiserver health ...
	I0116 22:53:13.588379   23993 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 22:53:13.764721   23993 request.go:629] Waited for 176.26935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0116 22:53:13.770321   23993 system_pods.go:59] 7 kube-system pods found
	I0116 22:53:13.770360   23993 system_pods.go:61] "coredns-66bff467f8-szh5r" [21d89063-38eb-43ae-9724-51a7c9422814] Running
	I0116 22:53:13.770365   23993 system_pods.go:61] "etcd-ingress-addon-legacy-264702" [a072c0bf-5c71-4c94-a603-8b2e80e8ff5d] Running
	I0116 22:53:13.770369   23993 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-264702" [5da11802-44f0-4933-8b48-b711478453dc] Running
	I0116 22:53:13.770373   23993 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-264702" [9baee616-988b-4b9b-942c-585d2fbe4a92] Running
	I0116 22:53:13.770377   23993 system_pods.go:61] "kube-proxy-c6wmq" [2927eae3-48e8-4375-a30a-2fd66f196661] Running
	I0116 22:53:13.770380   23993 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-264702" [97b8a75a-03e2-4914-9260-bcfeb9a63b67] Running
	I0116 22:53:13.770384   23993 system_pods.go:61] "storage-provisioner" [ba1518bd-de5a-4d8b-bd92-e4f62ff3b522] Running
	I0116 22:53:13.770389   23993 system_pods.go:74] duration metric: took 182.005648ms to wait for pod list to return data ...
	I0116 22:53:13.770396   23993 default_sa.go:34] waiting for default service account to be created ...
	I0116 22:53:13.964756   23993 request.go:629] Waited for 194.281842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0116 22:53:13.967735   23993 default_sa.go:45] found service account: "default"
	I0116 22:53:13.967758   23993 default_sa.go:55] duration metric: took 197.357006ms for default service account to be created ...
	I0116 22:53:13.967767   23993 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 22:53:14.165248   23993 request.go:629] Waited for 197.430364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0116 22:53:14.171040   23993 system_pods.go:86] 7 kube-system pods found
	I0116 22:53:14.171064   23993 system_pods.go:89] "coredns-66bff467f8-szh5r" [21d89063-38eb-43ae-9724-51a7c9422814] Running
	I0116 22:53:14.171070   23993 system_pods.go:89] "etcd-ingress-addon-legacy-264702" [a072c0bf-5c71-4c94-a603-8b2e80e8ff5d] Running
	I0116 22:53:14.171074   23993 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-264702" [5da11802-44f0-4933-8b48-b711478453dc] Running
	I0116 22:53:14.171078   23993 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-264702" [9baee616-988b-4b9b-942c-585d2fbe4a92] Running
	I0116 22:53:14.171082   23993 system_pods.go:89] "kube-proxy-c6wmq" [2927eae3-48e8-4375-a30a-2fd66f196661] Running
	I0116 22:53:14.171086   23993 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-264702" [97b8a75a-03e2-4914-9260-bcfeb9a63b67] Running
	I0116 22:53:14.171089   23993 system_pods.go:89] "storage-provisioner" [ba1518bd-de5a-4d8b-bd92-e4f62ff3b522] Running
	I0116 22:53:14.171095   23993 system_pods.go:126] duration metric: took 203.324093ms to wait for k8s-apps to be running ...
	I0116 22:53:14.171102   23993 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 22:53:14.171147   23993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 22:53:14.184323   23993 system_svc.go:56] duration metric: took 13.208445ms WaitForService to wait for kubelet.
	I0116 22:53:14.184365   23993 kubeadm.go:581] duration metric: took 4.363002403s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 22:53:14.184382   23993 node_conditions.go:102] verifying NodePressure condition ...
	I0116 22:53:14.365406   23993 request.go:629] Waited for 180.963594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes
	I0116 22:53:14.369277   23993 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 22:53:14.369302   23993 node_conditions.go:123] node cpu capacity is 2
	I0116 22:53:14.369312   23993 node_conditions.go:105] duration metric: took 184.924588ms to run NodePressure ...
	I0116 22:53:14.369323   23993 start.go:228] waiting for startup goroutines ...
	I0116 22:53:14.369328   23993 start.go:233] waiting for cluster config update ...
	I0116 22:53:14.369338   23993 start.go:242] writing updated cluster config ...
	I0116 22:53:14.369611   23993 ssh_runner.go:195] Run: rm -f paused
	I0116 22:53:14.414674   23993 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0116 22:53:14.416759   23993 out.go:177] 
	W0116 22:53:14.418409   23993 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0116 22:53:14.420084   23993 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0116 22:53:14.421824   23993 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-264702" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 22:52:18 UTC, ends at Tue 2024-01-16 22:56:31 UTC. --
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.396453419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705445791396434994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=22a79ceb-ea12-4aed-aba2-b1eb6fee859a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.397511300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a166b169-5b84-42a8-983e-6f8c4aabc509 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.397656262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a166b169-5b84-42a8-983e-6f8c4aabc509 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.398203796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca310d9bdcb72c58e5a84556363b9f29a6d2aa93a629e6844434aa7af9bbb8da,PodSandboxId:c6bef26d6b01a970229393691b5f8331616639b0b3eb0961e769aca87afd9f9f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445776959485021,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bczxl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4979eb7-8ff9-4fbf-a28c-a2298963a3f9,},Annotations:map[string]string{io.kubernetes.container.hash: 646115a6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d491fb5cf0baffac57b7c189f810fa4950a6c6bd2bba5a6d4eda5486fa1850f,PodSandboxId:45537bb2da544430d6645d3925c971fc1611747d5eb60ad056814fca4ddc60ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705445636275628211,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5386b93a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692962f03b89feee21825f125123e4bd765b4be317ec0dc4bcb9f36a2e25c3b,PodSandboxId:9d99396129ca769c43c3f90576487c91228c12321b031fbae8e8d25e194bab62,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705445610301161953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rdf92,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 990e23d7-5eb2-4cdb-b187-1d814c3ee6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab56c180,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32a58264b1b43a4a8dbd5c38fb291f2770dd546664fa7ac770415e5d472948eb,PodSandboxId:50d524c001007a59118717752c1fe767cf18bc0210361257b289fc090a04c2af,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445600537285894,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fd4hw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4bd40cb-07a5-49c0-9007-4766c3da0f95,},Annotations:map[string]string{io.kubernetes.container.hash: 30a74d82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247c4fc73d487a7848049cb74e66408ac44ca0692fea980ffdb4ef39346bde31,PodSandboxId:8c5ef9f0ba920c2a2e95e99751d82100eb401268fa9a76640433eb29ea005d1d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445599494087116,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bf8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51e404c4-9bd1-4f5f-b37a-961369d484fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5108cc62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54329377d5d7e4c793176934dee4a08df409732fc91c94083955f901743ba67f,PodSandboxId:7ff58986d35b5923b0bb763f1ecd5e3bd62c3f322451d1b8580435d38e20aa51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705445591664888718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1518bd-de5a-4d8b-bd92-e4f62ff3b522,},Annotations:map[string]string{io.kubernetes.container.hash: 2840093c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83987b5c271ce099b8eb3c0e8bb8225f93487c75e419a2336ca654e54761b852,PodSandboxId:73e7f05a4c7e3d7454b53a1800114f88389b6e426bc137e6db88efadbbc8abeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705445590935270508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-szh5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d89063-38eb-43ae-9724-51a7c9422814,},Annotations:map[string]string{io.kubernetes.container.hash: 591bce03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa0a5ebfe5d80856982481246b1c
1af1adfa05ef8074aa46a0f48e0de53d9358,PodSandboxId:263277dcd53970763ce54ddf5264e7faf3abe023d0f3a7c15a368fe380e0311c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705445590311286768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c6wmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2927eae3-48e8-4375-a30a-2fd66f196661,},Annotations:map[string]string{io.kubernetes.container.hash: 4fb554c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72dce532b95a3f7e892adf10c39c344ab05a71ba9bd08b054260166ab604ca8,Pod
SandboxId:1169a462a19061b6d59c21c13e1a89e97f7b4a394fe34987d61525ed674d1f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705445565566289440,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f52527afa477c17a40ebd6a4788001b,},Annotations:map[string]string{io.kubernetes.container.hash: e358db0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d79d40c8beca99333df9b99ffedfb9ea1ec0379b494f24eafc6511391f5606b,PodSandboxId:e0a7505b5569ef644bb230e103ff58889fa3
a87db5a0847ae8be9168901c6702,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705445564302751016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff073d23a1e9a72b2fb28433af0853e9a4446d16103eb08bbc9717d000ccc,PodSandboxId:7c88e6350ff4dab440f03537d57f66608eccbbfcbe
c299428e3346103741218e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705445564071290703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76ab744d358cd61f7933bdcf2abe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 663c749f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c171611a0af51bfa5c7c48d065264c7924eda9213dcbd8e3ab1a35ebf859c8,PodSandboxId:6ebc06ac3c311dea4d895361f631385862e4d0a9d32c7b1d
ce50054a62e90667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705445564002125067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a166b169-5b84-42a8-983e-6f8c4aabc509 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.437917600Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a90f852a-8113-4ea3-ba20-2ca7aef9c5f7 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.437975837Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a90f852a-8113-4ea3-ba20-2ca7aef9c5f7 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.439850704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c06c3d51-c455-43f2-afc7-bb3b74b9085c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.440317247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705445791440302742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=c06c3d51-c455-43f2-afc7-bb3b74b9085c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.441070717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f5b82ba8-36b7-4317-9e2d-9aafbd572988 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.441120691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f5b82ba8-36b7-4317-9e2d-9aafbd572988 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.441360711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca310d9bdcb72c58e5a84556363b9f29a6d2aa93a629e6844434aa7af9bbb8da,PodSandboxId:c6bef26d6b01a970229393691b5f8331616639b0b3eb0961e769aca87afd9f9f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445776959485021,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bczxl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4979eb7-8ff9-4fbf-a28c-a2298963a3f9,},Annotations:map[string]string{io.kubernetes.container.hash: 646115a6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d491fb5cf0baffac57b7c189f810fa4950a6c6bd2bba5a6d4eda5486fa1850f,PodSandboxId:45537bb2da544430d6645d3925c971fc1611747d5eb60ad056814fca4ddc60ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705445636275628211,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5386b93a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692962f03b89feee21825f125123e4bd765b4be317ec0dc4bcb9f36a2e25c3b,PodSandboxId:9d99396129ca769c43c3f90576487c91228c12321b031fbae8e8d25e194bab62,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705445610301161953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rdf92,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 990e23d7-5eb2-4cdb-b187-1d814c3ee6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab56c180,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32a58264b1b43a4a8dbd5c38fb291f2770dd546664fa7ac770415e5d472948eb,PodSandboxId:50d524c001007a59118717752c1fe767cf18bc0210361257b289fc090a04c2af,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445600537285894,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fd4hw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4bd40cb-07a5-49c0-9007-4766c3da0f95,},Annotations:map[string]string{io.kubernetes.container.hash: 30a74d82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247c4fc73d487a7848049cb74e66408ac44ca0692fea980ffdb4ef39346bde31,PodSandboxId:8c5ef9f0ba920c2a2e95e99751d82100eb401268fa9a76640433eb29ea005d1d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445599494087116,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bf8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51e404c4-9bd1-4f5f-b37a-961369d484fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5108cc62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54329377d5d7e4c793176934dee4a08df409732fc91c94083955f901743ba67f,PodSandboxId:7ff58986d35b5923b0bb763f1ecd5e3bd62c3f322451d1b8580435d38e20aa51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705445591664888718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1518bd-de5a-4d8b-bd92-e4f62ff3b522,},Annotations:map[string]string{io.kubernetes.container.hash: 2840093c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83987b5c271ce099b8eb3c0e8bb8225f93487c75e419a2336ca654e54761b852,PodSandboxId:73e7f05a4c7e3d7454b53a1800114f88389b6e426bc137e6db88efadbbc8abeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705445590935270508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-szh5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d89063-38eb-43ae-9724-51a7c9422814,},Annotations:map[string]string{io.kubernetes.container.hash: 591bce03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa0a5ebfe5d80856982481246b1c
1af1adfa05ef8074aa46a0f48e0de53d9358,PodSandboxId:263277dcd53970763ce54ddf5264e7faf3abe023d0f3a7c15a368fe380e0311c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705445590311286768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c6wmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2927eae3-48e8-4375-a30a-2fd66f196661,},Annotations:map[string]string{io.kubernetes.container.hash: 4fb554c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72dce532b95a3f7e892adf10c39c344ab05a71ba9bd08b054260166ab604ca8,Pod
SandboxId:1169a462a19061b6d59c21c13e1a89e97f7b4a394fe34987d61525ed674d1f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705445565566289440,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f52527afa477c17a40ebd6a4788001b,},Annotations:map[string]string{io.kubernetes.container.hash: e358db0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d79d40c8beca99333df9b99ffedfb9ea1ec0379b494f24eafc6511391f5606b,PodSandboxId:e0a7505b5569ef644bb230e103ff58889fa3
a87db5a0847ae8be9168901c6702,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705445564302751016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff073d23a1e9a72b2fb28433af0853e9a4446d16103eb08bbc9717d000ccc,PodSandboxId:7c88e6350ff4dab440f03537d57f66608eccbbfcbe
c299428e3346103741218e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705445564071290703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76ab744d358cd61f7933bdcf2abe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 663c749f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c171611a0af51bfa5c7c48d065264c7924eda9213dcbd8e3ab1a35ebf859c8,PodSandboxId:6ebc06ac3c311dea4d895361f631385862e4d0a9d32c7b1d
ce50054a62e90667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705445564002125067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f5b82ba8-36b7-4317-9e2d-9aafbd572988 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.478526655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c4511785-c157-4208-86f4-2d75041a341b name=/runtime.v1.RuntimeService/Version
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.478636936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c4511785-c157-4208-86f4-2d75041a341b name=/runtime.v1.RuntimeService/Version
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.480184215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b1f569c2-54b7-43a5-ade2-093a96069ff9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.480803219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705445791480785442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=b1f569c2-54b7-43a5-ade2-093a96069ff9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.481293105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9e726413-2049-4ea8-a070-dd7002dbaddf name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.481342268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9e726413-2049-4ea8-a070-dd7002dbaddf name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.481677343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca310d9bdcb72c58e5a84556363b9f29a6d2aa93a629e6844434aa7af9bbb8da,PodSandboxId:c6bef26d6b01a970229393691b5f8331616639b0b3eb0961e769aca87afd9f9f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445776959485021,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bczxl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4979eb7-8ff9-4fbf-a28c-a2298963a3f9,},Annotations:map[string]string{io.kubernetes.container.hash: 646115a6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d491fb5cf0baffac57b7c189f810fa4950a6c6bd2bba5a6d4eda5486fa1850f,PodSandboxId:45537bb2da544430d6645d3925c971fc1611747d5eb60ad056814fca4ddc60ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705445636275628211,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5386b93a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692962f03b89feee21825f125123e4bd765b4be317ec0dc4bcb9f36a2e25c3b,PodSandboxId:9d99396129ca769c43c3f90576487c91228c12321b031fbae8e8d25e194bab62,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705445610301161953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rdf92,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 990e23d7-5eb2-4cdb-b187-1d814c3ee6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab56c180,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32a58264b1b43a4a8dbd5c38fb291f2770dd546664fa7ac770415e5d472948eb,PodSandboxId:50d524c001007a59118717752c1fe767cf18bc0210361257b289fc090a04c2af,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445600537285894,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fd4hw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4bd40cb-07a5-49c0-9007-4766c3da0f95,},Annotations:map[string]string{io.kubernetes.container.hash: 30a74d82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247c4fc73d487a7848049cb74e66408ac44ca0692fea980ffdb4ef39346bde31,PodSandboxId:8c5ef9f0ba920c2a2e95e99751d82100eb401268fa9a76640433eb29ea005d1d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445599494087116,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bf8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51e404c4-9bd1-4f5f-b37a-961369d484fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5108cc62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54329377d5d7e4c793176934dee4a08df409732fc91c94083955f901743ba67f,PodSandboxId:7ff58986d35b5923b0bb763f1ecd5e3bd62c3f322451d1b8580435d38e20aa51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705445591664888718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1518bd-de5a-4d8b-bd92-e4f62ff3b522,},Annotations:map[string]string{io.kubernetes.container.hash: 2840093c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83987b5c271ce099b8eb3c0e8bb8225f93487c75e419a2336ca654e54761b852,PodSandboxId:73e7f05a4c7e3d7454b53a1800114f88389b6e426bc137e6db88efadbbc8abeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705445590935270508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-szh5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d89063-38eb-43ae-9724-51a7c9422814,},Annotations:map[string]string{io.kubernetes.container.hash: 591bce03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa0a5ebfe5d80856982481246b1c
1af1adfa05ef8074aa46a0f48e0de53d9358,PodSandboxId:263277dcd53970763ce54ddf5264e7faf3abe023d0f3a7c15a368fe380e0311c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705445590311286768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c6wmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2927eae3-48e8-4375-a30a-2fd66f196661,},Annotations:map[string]string{io.kubernetes.container.hash: 4fb554c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72dce532b95a3f7e892adf10c39c344ab05a71ba9bd08b054260166ab604ca8,Pod
SandboxId:1169a462a19061b6d59c21c13e1a89e97f7b4a394fe34987d61525ed674d1f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705445565566289440,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f52527afa477c17a40ebd6a4788001b,},Annotations:map[string]string{io.kubernetes.container.hash: e358db0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d79d40c8beca99333df9b99ffedfb9ea1ec0379b494f24eafc6511391f5606b,PodSandboxId:e0a7505b5569ef644bb230e103ff58889fa3
a87db5a0847ae8be9168901c6702,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705445564302751016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff073d23a1e9a72b2fb28433af0853e9a4446d16103eb08bbc9717d000ccc,PodSandboxId:7c88e6350ff4dab440f03537d57f66608eccbbfcbe
c299428e3346103741218e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705445564071290703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76ab744d358cd61f7933bdcf2abe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 663c749f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c171611a0af51bfa5c7c48d065264c7924eda9213dcbd8e3ab1a35ebf859c8,PodSandboxId:6ebc06ac3c311dea4d895361f631385862e4d0a9d32c7b1d
ce50054a62e90667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705445564002125067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9e726413-2049-4ea8-a070-dd7002dbaddf name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.522334771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5cd77a91-be0a-4706-96f4-e9574b5738f2 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.522418110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5cd77a91-be0a-4706-96f4-e9574b5738f2 name=/runtime.v1.RuntimeService/Version
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.523984738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3d781552-a89c-49e4-b657-beaad0fe8c87 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.524488079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705445791524473094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=3d781552-a89c-49e4-b657-beaad0fe8c87 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.525117898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ccb67ce5-0ee1-484e-9d0c-f778aae374d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.525184839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ccb67ce5-0ee1-484e-9d0c-f778aae374d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 22:56:31 ingress-addon-legacy-264702 crio[720]: time="2024-01-16 22:56:31.525416853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca310d9bdcb72c58e5a84556363b9f29a6d2aa93a629e6844434aa7af9bbb8da,PodSandboxId:c6bef26d6b01a970229393691b5f8331616639b0b3eb0961e769aca87afd9f9f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705445776959485021,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-bczxl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4979eb7-8ff9-4fbf-a28c-a2298963a3f9,},Annotations:map[string]string{io.kubernetes.container.hash: 646115a6,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d491fb5cf0baffac57b7c189f810fa4950a6c6bd2bba5a6d4eda5486fa1850f,PodSandboxId:45537bb2da544430d6645d3925c971fc1611747d5eb60ad056814fca4ddc60ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705445636275628211,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5386b93a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692962f03b89feee21825f125123e4bd765b4be317ec0dc4bcb9f36a2e25c3b,PodSandboxId:9d99396129ca769c43c3f90576487c91228c12321b031fbae8e8d25e194bab62,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705445610301161953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rdf92,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 990e23d7-5eb2-4cdb-b187-1d814c3ee6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab56c180,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32a58264b1b43a4a8dbd5c38fb291f2770dd546664fa7ac770415e5d472948eb,PodSandboxId:50d524c001007a59118717752c1fe767cf18bc0210361257b289fc090a04c2af,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445600537285894,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fd4hw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4bd40cb-07a5-49c0-9007-4766c3da0f95,},Annotations:map[string]string{io.kubernetes.container.hash: 30a74d82,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247c4fc73d487a7848049cb74e66408ac44ca0692fea980ffdb4ef39346bde31,PodSandboxId:8c5ef9f0ba920c2a2e95e99751d82100eb401268fa9a76640433eb29ea005d1d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705445599494087116,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bf8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51e404c4-9bd1-4f5f-b37a-961369d484fe,},Annotations:map[string]string{io.kubernetes.container.hash: 5108cc62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54329377d5d7e4c793176934dee4a08df409732fc91c94083955f901743ba67f,PodSandboxId:7ff58986d35b5923b0bb763f1ecd5e3bd62c3f322451d1b8580435d38e20aa51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705445591664888718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1518bd-de5a-4d8b-bd92-e4f62ff3b522,},Annotations:map[string]string{io.kubernetes.container.hash: 2840093c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83987b5c271ce099b8eb3c0e8bb8225f93487c75e419a2336ca654e54761b852,PodSandboxId:73e7f05a4c7e3d7454b53a1800114f88389b6e426bc137e6db88efadbbc8abeb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705445590935270508,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-szh5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d89063-38eb-43ae-9724-51a7c9422814,},Annotations:map[string]string{io.kubernetes.container.hash: 591bce03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa0a5ebfe5d80856982481246b1c
1af1adfa05ef8074aa46a0f48e0de53d9358,PodSandboxId:263277dcd53970763ce54ddf5264e7faf3abe023d0f3a7c15a368fe380e0311c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705445590311286768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c6wmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2927eae3-48e8-4375-a30a-2fd66f196661,},Annotations:map[string]string{io.kubernetes.container.hash: 4fb554c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72dce532b95a3f7e892adf10c39c344ab05a71ba9bd08b054260166ab604ca8,Pod
SandboxId:1169a462a19061b6d59c21c13e1a89e97f7b4a394fe34987d61525ed674d1f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705445565566289440,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f52527afa477c17a40ebd6a4788001b,},Annotations:map[string]string{io.kubernetes.container.hash: e358db0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d79d40c8beca99333df9b99ffedfb9ea1ec0379b494f24eafc6511391f5606b,PodSandboxId:e0a7505b5569ef644bb230e103ff58889fa3
a87db5a0847ae8be9168901c6702,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705445564302751016,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caff073d23a1e9a72b2fb28433af0853e9a4446d16103eb08bbc9717d000ccc,PodSandboxId:7c88e6350ff4dab440f03537d57f66608eccbbfcbe
c299428e3346103741218e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705445564071290703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76ab744d358cd61f7933bdcf2abe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 663c749f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c171611a0af51bfa5c7c48d065264c7924eda9213dcbd8e3ab1a35ebf859c8,PodSandboxId:6ebc06ac3c311dea4d895361f631385862e4d0a9d32c7b1d
ce50054a62e90667,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705445564002125067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-264702,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ccb67ce5-0ee1-484e-9d0c-f778aae374d1 name=/runtime.v1.RuntimeSer
vice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ca310d9bdcb72       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            14 seconds ago      Running             hello-world-app           0                   c6bef26d6b01a       hello-world-app-5f5d8b66bb-bczxl
	4d491fb5cf0ba       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   45537bb2da544       nginx
	9692962f03b89       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   9d99396129ca7       ingress-nginx-controller-7fcf777cb7-rdf92
	32a58264b1b43       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   50d524c001007       ingress-nginx-admission-patch-fd4hw
	247c4fc73d487       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   8c5ef9f0ba920       ingress-nginx-admission-create-bf8dl
	54329377d5d7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   7ff58986d35b5       storage-provisioner
	83987b5c271ce       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   73e7f05a4c7e3       coredns-66bff467f8-szh5r
	fa0a5ebfe5d80       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   263277dcd5397       kube-proxy-c6wmq
	c72dce532b95a       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   1169a462a1906       etcd-ingress-addon-legacy-264702
	3d79d40c8beca       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   e0a7505b5569e       kube-scheduler-ingress-addon-legacy-264702
	6caff073d23a1       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   7c88e6350ff4d       kube-apiserver-ingress-addon-legacy-264702
	e5c171611a0af       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   6ebc06ac3c311       kube-controller-manager-ingress-addon-legacy-264702
	
	
	==> coredns [83987b5c271ce099b8eb3c0e8bb8225f93487c75e419a2336ca654e54761b852] <==
	[INFO] 10.244.0.5:49654 - 65519 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000225671s
	[INFO] 10.244.0.5:49654 - 37752 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000083486s
	[INFO] 10.244.0.5:55692 - 4062 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00034813s
	[INFO] 10.244.0.5:55692 - 34083 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077661s
	[INFO] 10.244.0.5:49654 - 30248 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044158s
	[INFO] 10.244.0.5:49654 - 22623 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000123354s
	[INFO] 10.244.0.5:55692 - 13884 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.001128692s
	[INFO] 10.244.0.5:49654 - 45137 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000444516s
	[INFO] 10.244.0.5:55692 - 62823 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000200471s
	[INFO] 10.244.0.5:49654 - 49651 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000458963s
	[INFO] 10.244.0.5:55692 - 49462 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000156937s
	[INFO] 10.244.0.5:45743 - 2219 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076463s
	[INFO] 10.244.0.5:36264 - 27772 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056022s
	[INFO] 10.244.0.5:45743 - 15354 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033746s
	[INFO] 10.244.0.5:45743 - 15736 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031256s
	[INFO] 10.244.0.5:45743 - 1813 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032672s
	[INFO] 10.244.0.5:45743 - 5538 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040391s
	[INFO] 10.244.0.5:45743 - 28852 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035405s
	[INFO] 10.244.0.5:45743 - 8804 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000210933s
	[INFO] 10.244.0.5:36264 - 49171 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066579s
	[INFO] 10.244.0.5:36264 - 24639 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063067s
	[INFO] 10.244.0.5:36264 - 59745 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093905s
	[INFO] 10.244.0.5:36264 - 52897 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000251057s
	[INFO] 10.244.0.5:36264 - 2581 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00023156s
	[INFO] 10.244.0.5:36264 - 2417 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000161336s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-264702
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-264702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=ingress-addon-legacy-264702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T22_52_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 22:52:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-264702
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 22:56:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 22:56:23 +0000   Tue, 16 Jan 2024 22:52:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 22:56:23 +0000   Tue, 16 Jan 2024 22:52:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 22:56:23 +0000   Tue, 16 Jan 2024 22:52:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 22:56:23 +0000   Tue, 16 Jan 2024 22:53:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ingress-addon-legacy-264702
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 072b625ee584481aa037e9c55614f5bf
	  System UUID:                072b625e-e584-481a-a037-e9c55614f5bf
	  Boot ID:                    7265b8b1-7134-4f4d-b9e7-be6a5340e621
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-bczxl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 coredns-66bff467f8-szh5r                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m23s
	  kube-system                 etcd-ingress-addon-legacy-264702                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-264702             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-264702    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-proxy-c6wmq                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 kube-scheduler-ingress-addon-legacy-264702             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m49s (x5 over 3m49s)  kubelet     Node ingress-addon-legacy-264702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x5 over 3m49s)  kubelet     Node ingress-addon-legacy-264702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x4 over 3m49s)  kubelet     Node ingress-addon-legacy-264702 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m38s                  kubelet     Node ingress-addon-legacy-264702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s                  kubelet     Node ingress-addon-legacy-264702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s                  kubelet     Node ingress-addon-legacy-264702 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m28s                  kubelet     Node ingress-addon-legacy-264702 status is now: NodeReady
	  Normal  Starting                 3m21s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 22:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.085156] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.321604] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.683926] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.130941] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.278562] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.062037] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.104559] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.147089] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.114588] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.199494] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +7.760145] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +3.172142] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.902865] systemd-fstab-generator[1434]: Ignoring "noauto" for root device
	[Jan16 22:53] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.113727] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.655178] kauditd_printk_skb: 4 callbacks suppressed
	[ +28.235792] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.265049] kauditd_printk_skb: 3 callbacks suppressed
	[Jan16 22:56] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [c72dce532b95a3f7e892adf10c39c344ab05a71ba9bd08b054260166ab604ca8] <==
	raft2024/01/16 22:52:45 INFO: dda2c3e6a900b50e became follower at term 0
	raft2024/01/16 22:52:45 INFO: newRaft dda2c3e6a900b50e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/16 22:52:45 INFO: dda2c3e6a900b50e became follower at term 1
	raft2024/01/16 22:52:45 INFO: dda2c3e6a900b50e switched to configuration voters=(15970542624054490382)
	2024-01-16 22:52:45.703829 W | auth: simple token is not cryptographically signed
	2024-01-16 22:52:45.707744 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-16 22:52:45.712823 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 22:52:45.712985 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 22:52:45.713119 I | embed: listening for peers on 192.168.39.47:2380
	2024-01-16 22:52:45.713185 I | etcdserver: dda2c3e6a900b50e as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/16 22:52:45 INFO: dda2c3e6a900b50e switched to configuration voters=(15970542624054490382)
	2024-01-16 22:52:45.713493 I | etcdserver/membership: added member dda2c3e6a900b50e [https://192.168.39.47:2380] to cluster f4536840deabf9cf
	raft2024/01/16 22:52:46 INFO: dda2c3e6a900b50e is starting a new election at term 1
	raft2024/01/16 22:52:46 INFO: dda2c3e6a900b50e became candidate at term 2
	raft2024/01/16 22:52:46 INFO: dda2c3e6a900b50e received MsgVoteResp from dda2c3e6a900b50e at term 2
	raft2024/01/16 22:52:46 INFO: dda2c3e6a900b50e became leader at term 2
	raft2024/01/16 22:52:46 INFO: raft.node: dda2c3e6a900b50e elected leader dda2c3e6a900b50e at term 2
	2024-01-16 22:52:46.597739 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-16 22:52:46.599537 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-16 22:52:46.599684 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-16 22:52:46.599897 I | etcdserver: published {Name:ingress-addon-legacy-264702 ClientURLs:[https://192.168.39.47:2379]} to cluster f4536840deabf9cf
	2024-01-16 22:52:46.600006 I | embed: ready to serve client requests
	2024-01-16 22:52:46.600160 I | embed: ready to serve client requests
	2024-01-16 22:52:46.601266 I | embed: serving client requests on 192.168.39.47:2379
	2024-01-16 22:52:46.601323 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 22:56:31 up 4 min,  0 users,  load average: 1.10, 0.51, 0.21
	Linux ingress-addon-legacy-264702 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6caff073d23a1e9a72b2fb28433af0853e9a4446d16103eb08bbc9717d000ccc] <==
	E0116 22:52:49.511353       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.47, ResourceVersion: 0, AdditionalErrorMsg: 
	I0116 22:52:49.555933       1 cache.go:39] Caches are synced for autoregister controller
	I0116 22:52:49.570521       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0116 22:52:49.570642       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 22:52:49.570673       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0116 22:52:49.570762       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 22:52:50.451646       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0116 22:52:50.451782       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0116 22:52:50.463743       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0116 22:52:50.469227       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0116 22:52:50.469268       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0116 22:52:50.894800       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 22:52:50.938194       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0116 22:52:51.082107       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.47]
	I0116 22:52:51.083121       1 controller.go:609] quota admission added evaluator for: endpoints
	I0116 22:52:51.086828       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 22:52:51.833850       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0116 22:52:52.720969       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0116 22:52:52.869491       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0116 22:52:53.052901       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 22:53:08.772295       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0116 22:53:09.151003       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0116 22:53:15.230989       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0116 22:53:49.732382       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0116 22:56:24.073471       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [e5c171611a0af51bfa5c7c48d065264c7924eda9213dcbd8e3ab1a35ebf859c8] <==
	I0116 22:53:09.139155       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-264702", UID:"bada09a7-7ee3-4228-b3c7-addaafae0f58", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-264702 event: Registered Node ingress-addon-legacy-264702 in Controller
	I0116 22:53:09.139363       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0116 22:53:09.147399       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0116 22:53:09.172224       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1e317db4-24d0-430c-bb60-3a5998b4f845", APIVersion:"apps/v1", ResourceVersion:"213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-c6wmq
	E0116 22:53:09.211636       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"1e317db4-24d0-430c-bb60-3a5998b4f845", ResourceVersion:"213", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63841042372, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00150c900), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc00150c940)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00150c960), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013d25c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc00150c980), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00150c9a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00150c9e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0015008c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001630668), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00011a770), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000b2940)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016306b8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0116 22:53:09.237262       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0116 22:53:09.278006       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 22:53:09.284942       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 22:53:09.291937       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 22:53:09.291995       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 22:53:09.312448       1 shared_informer.go:230] Caches are synced for disruption 
	I0116 22:53:09.312490       1 disruption.go:339] Sending events to api server.
	I0116 22:53:09.335650       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3ffbaac9-419b-4486-a553-694fb6a31552", APIVersion:"apps/v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0116 22:53:09.346413       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 22:53:09.431975       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4b131bf9-3ed3-45cc-bf91-4c69aba5fd21", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-7mpxw
	I0116 22:53:15.179361       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"924d1247-9236-4d35-b7e4-253bdc94a68c", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0116 22:53:15.213783       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"65e0e721-c2c5-4b0a-9ecd-ebdbe4be4f5a", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-rdf92
	I0116 22:53:15.254150       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"106a2db8-3121-4ff7-ada3-18d62c874058", APIVersion:"batch/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-bf8dl
	I0116 22:53:15.308316       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"fa58b14d-e3f3-4520-8ac6-0bca3379d724", APIVersion:"batch/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-fd4hw
	I0116 22:53:20.262559       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"106a2db8-3121-4ff7-ada3-18d62c874058", APIVersion:"batch/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 22:53:21.269104       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"fa58b14d-e3f3-4520-8ac6-0bca3379d724", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 22:56:13.417940       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"5198aeb5-3d33-4c01-827c-e984cbce81a5", APIVersion:"apps/v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0116 22:56:13.446145       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"17bea6c1-29c0-4bea-82e1-e12dd4d0c4bb", APIVersion:"apps/v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-bczxl
	E0116 22:56:28.682246       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-27ffs" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [fa0a5ebfe5d80856982481246b1c1af1adfa05ef8074aa46a0f48e0de53d9358] <==
	W0116 22:53:10.462978       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0116 22:53:10.493037       1 node.go:136] Successfully retrieved node IP: 192.168.39.47
	I0116 22:53:10.493158       1 server_others.go:186] Using iptables Proxier.
	I0116 22:53:10.493698       1 server.go:583] Version: v1.18.20
	I0116 22:53:10.498736       1 config.go:315] Starting service config controller
	I0116 22:53:10.498792       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0116 22:53:10.500260       1 config.go:133] Starting endpoints config controller
	I0116 22:53:10.500309       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0116 22:53:10.598998       1 shared_informer.go:230] Caches are synced for service config 
	I0116 22:53:10.600611       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [3d79d40c8beca99333df9b99ffedfb9ea1ec0379b494f24eafc6511391f5606b] <==
	I0116 22:52:49.548017       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 22:52:49.553328       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 22:52:49.550664       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0116 22:52:49.558864       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 22:52:49.558974       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 22:52:49.559087       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 22:52:49.559169       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 22:52:49.559239       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 22:52:49.559313       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 22:52:49.559389       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 22:52:49.559456       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 22:52:49.559517       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 22:52:49.559657       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 22:52:49.559735       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 22:52:49.561955       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 22:52:50.414764       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 22:52:50.437112       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 22:52:50.455664       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 22:52:50.518825       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 22:52:50.677340       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 22:52:50.712404       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 22:52:50.723884       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 22:52:50.736057       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0116 22:52:53.053537       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0116 22:53:08.950773       1 factory.go:503] pod: kube-system/coredns-66bff467f8-szh5r is already present in the active queue
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 22:52:18 UTC, ends at Tue 2024-01-16 22:56:32 UTC. --
	Jan 16 22:53:31 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:53:31.422217    1441 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 22:53:31 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:53:31.552223    1441 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-6w4rt" (UniqueName: "kubernetes.io/secret/a116063d-1549-403d-8c62-a2291a485134-minikube-ingress-dns-token-6w4rt") pod "kube-ingress-dns-minikube" (UID: "a116063d-1549-403d-8c62-a2291a485134")
	Jan 16 22:53:49 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:53:49.902715    1441 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 22:53:49 ingress-addon-legacy-264702 kubelet[1441]: E0116 22:53:49.906861    1441 reflector.go:178] object-"default"/"default-token-hcwf2": Failed to list *v1.Secret: secrets "default-token-hcwf2" is forbidden: User "system:node:ingress-addon-legacy-264702" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "ingress-addon-legacy-264702" and this object
	Jan 16 22:53:49 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:53:49.909004    1441 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hcwf2" (UniqueName: "kubernetes.io/secret/dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7-default-token-hcwf2") pod "nginx" (UID: "dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7")
	Jan 16 22:53:51 ingress-addon-legacy-264702 kubelet[1441]: E0116 22:53:51.009527    1441 secret.go:195] Couldn't get secret default/default-token-hcwf2: failed to sync secret cache: timed out waiting for the condition
	Jan 16 22:53:51 ingress-addon-legacy-264702 kubelet[1441]: E0116 22:53:51.009825    1441 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7-default-token-hcwf2 podName:dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7 nodeName:}" failed. No retries permitted until 2024-01-16 22:53:51.509796564 +0000 UTC m=+58.858919033 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-hcwf2\" (UniqueName: \"kubernetes.io/secret/dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7-default-token-hcwf2\") pod \"nginx\" (UID: \"dee7eed1-0b9d-48a7-a8e5-1cb97c5a8bd7\") : failed to sync secret cache: timed out waiting for the condition"
	Jan 16 22:56:13 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:13.459358    1441 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 22:56:13 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:13.556870    1441 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hcwf2" (UniqueName: "kubernetes.io/secret/d4979eb7-8ff9-4fbf-a28c-a2298963a3f9-default-token-hcwf2") pod "hello-world-app-5f5d8b66bb-bczxl" (UID: "d4979eb7-8ff9-4fbf-a28c-a2298963a3f9")
	Jan 16 22:56:15 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:15.169766    1441 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ec1269b1ae661f760257e0969306ad2f5d1797d6f7a86680ff283c93be7b71ab
	Jan 16 22:56:15 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:15.201767    1441 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ec1269b1ae661f760257e0969306ad2f5d1797d6f7a86680ff283c93be7b71ab
	Jan 16 22:56:15 ingress-addon-legacy-264702 kubelet[1441]: E0116 22:56:15.202271    1441 remote_runtime.go:295] ContainerStatus "ec1269b1ae661f760257e0969306ad2f5d1797d6f7a86680ff283c93be7b71ab" from runtime service failed: rpc error: code = NotFound desc = could not find container "ec1269b1ae661f760257e0969306ad2f5d1797d6f7a86680ff283c93be7b71ab": container with ID starting with ec1269b1ae661f760257e0969306ad2f5d1797d6f7a86680ff283c93be7b71ab not found: ID does not exist
	Jan 16 22:56:15 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:15.263321    1441 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-6w4rt" (UniqueName: "kubernetes.io/secret/a116063d-1549-403d-8c62-a2291a485134-minikube-ingress-dns-token-6w4rt") pod "a116063d-1549-403d-8c62-a2291a485134" (UID: "a116063d-1549-403d-8c62-a2291a485134")
	Jan 16 22:56:15 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:15.269668    1441 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a116063d-1549-403d-8c62-a2291a485134-minikube-ingress-dns-token-6w4rt" (OuterVolumeSpecName: "minikube-ingress-dns-token-6w4rt") pod "a116063d-1549-403d-8c62-a2291a485134" (UID: "a116063d-1549-403d-8c62-a2291a485134"). InnerVolumeSpecName "minikube-ingress-dns-token-6w4rt". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 22:56:15 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:15.363668    1441 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-6w4rt" (UniqueName: "kubernetes.io/secret/a116063d-1549-403d-8c62-a2291a485134-minikube-ingress-dns-token-6w4rt") on node "ingress-addon-legacy-264702" DevicePath ""
	Jan 16 22:56:24 ingress-addon-legacy-264702 kubelet[1441]: E0116 22:56:24.052784    1441 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rdf92.17aaf5e79e0ff605", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rdf92", UID:"990e23d7-5eb2-4cdb-b187-1d814c3ee6b1", APIVersion:"v1", ResourceVersion:"447", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-264702"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc161e046030e0605, ext:211400373159, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc161e046030e0605, ext:211400373159, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rdf92.17aaf5e79e0ff605" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 22:56:24 ingress-addon-legacy-264702 kubelet[1441]: E0116 22:56:24.071703    1441 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rdf92.17aaf5e79e0ff605", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rdf92", UID:"990e23d7-5eb2-4cdb-b187-1d814c3ee6b1", APIVersion:"v1", ResourceVersion:"447", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-264702"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc161e046030e0605, ext:211400373159, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc161e04603fc7e44, ext:211416001501, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rdf92.17aaf5e79e0ff605" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 22:56:27 ingress-addon-legacy-264702 kubelet[1441]: W0116 22:56:27.269855    1441 pod_container_deletor.go:77] Container "9d99396129ca769c43c3f90576487c91228c12321b031fbae8e8d25e194bab62" not found in pod's containers
	Jan 16 22:56:28 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:28.204993    1441 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-tc884" (UniqueName: "kubernetes.io/secret/990e23d7-5eb2-4cdb-b187-1d814c3ee6b1-ingress-nginx-token-tc884") pod "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1" (UID: "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1")
	Jan 16 22:56:28 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:28.205046    1441 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/990e23d7-5eb2-4cdb-b187-1d814c3ee6b1-webhook-cert") pod "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1" (UID: "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1")
	Jan 16 22:56:28 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:28.214730    1441 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990e23d7-5eb2-4cdb-b187-1d814c3ee6b1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1" (UID: "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 22:56:28 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:28.215339    1441 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990e23d7-5eb2-4cdb-b187-1d814c3ee6b1-ingress-nginx-token-tc884" (OuterVolumeSpecName: "ingress-nginx-token-tc884") pod "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1" (UID: "990e23d7-5eb2-4cdb-b187-1d814c3ee6b1"). InnerVolumeSpecName "ingress-nginx-token-tc884". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 22:56:28 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:28.305374    1441 reconciler.go:319] Volume detached for volume "ingress-nginx-token-tc884" (UniqueName: "kubernetes.io/secret/990e23d7-5eb2-4cdb-b187-1d814c3ee6b1-ingress-nginx-token-tc884") on node "ingress-addon-legacy-264702" DevicePath ""
	Jan 16 22:56:28 ingress-addon-legacy-264702 kubelet[1441]: I0116 22:56:28.305407    1441 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/990e23d7-5eb2-4cdb-b187-1d814c3ee6b1-webhook-cert") on node "ingress-addon-legacy-264702" DevicePath ""
	Jan 16 22:56:29 ingress-addon-legacy-264702 kubelet[1441]: W0116 22:56:29.145515    1441 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/990e23d7-5eb2-4cdb-b187-1d814c3ee6b1/volumes" does not exist
	
	
	==> storage-provisioner [54329377d5d7e4c793176934dee4a08df409732fc91c94083955f901743ba67f] <==
	I0116 22:53:11.761945       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 22:53:11.773382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 22:53:11.773531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 22:53:11.784918       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 22:53:11.785071       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-264702_92060861-9e6e-4ff3-a805-9ded5fca40b0!
	I0116 22:53:11.786082       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73948eef-5f1a-417f-aa0a-fbdbf3c719a2", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-264702_92060861-9e6e-4ff3-a805-9ded5fca40b0 became leader
	I0116 22:53:11.886207       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-264702_92060861-9e6e-4ff3-a805-9ded5fca40b0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-264702 -n ingress-addon-legacy-264702
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-264702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (181.00s)

TestMultiNode/serial/RestartKeepsNodes (690.12s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328490
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-328490
E0116 23:05:47.135954   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:06:00.967850   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:07:10.182237   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-328490: exit status 82 (2m1.241386075s)

-- stdout --
	* Stopping node "multinode-328490"  ...
	* Stopping node "multinode-328490"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-328490" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328490 --wait=true -v=8 --alsologtostderr
E0116 23:08:31.442467   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:10:47.136080   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:11:00.968384   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:12:24.013062   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:13:31.442458   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:14:54.488017   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:15:47.136161   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:16:00.968260   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328490 --wait=true -v=8 --alsologtostderr: (9m26.070469795s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328490
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328490 -n multinode-328490
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-328490 logs -n 25: (1.484552982s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328490 cp multinode-328490-m02:/home/docker/cp-test.txt                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile870702025/001/cp-test_multinode-328490-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328490 cp multinode-328490-m02:/home/docker/cp-test.txt                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490:/home/docker/cp-test_multinode-328490-m02_multinode-328490.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n multinode-328490 sudo cat                                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | /home/docker/cp-test_multinode-328490-m02_multinode-328490.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328490 cp multinode-328490-m02:/home/docker/cp-test.txt                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m03:/home/docker/cp-test_multinode-328490-m02_multinode-328490-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n multinode-328490-m03 sudo cat                                   | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | /home/docker/cp-test_multinode-328490-m02_multinode-328490-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-328490 cp testdata/cp-test.txt                                                | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328490 cp multinode-328490-m03:/home/docker/cp-test.txt                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile870702025/001/cp-test_multinode-328490-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328490 cp multinode-328490-m03:/home/docker/cp-test.txt                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490:/home/docker/cp-test_multinode-328490-m03_multinode-328490.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n multinode-328490 sudo cat                                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | /home/docker/cp-test_multinode-328490-m03_multinode-328490.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328490 cp multinode-328490-m03:/home/docker/cp-test.txt                       | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m02:/home/docker/cp-test_multinode-328490-m03_multinode-328490-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n                                                                 | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | multinode-328490-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328490 ssh -n multinode-328490-m02 sudo cat                                   | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	|         | /home/docker/cp-test_multinode-328490-m03_multinode-328490-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-328490 node stop m03                                                          | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:04 UTC |
	| node    | multinode-328490 node start                                                             | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:04 UTC | 16 Jan 24 23:05 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-328490                                                                | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:05 UTC |                     |
	| stop    | -p multinode-328490                                                                     | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:05 UTC |                     |
	| start   | -p multinode-328490                                                                     | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:07 UTC | 16 Jan 24 23:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-328490                                                                | multinode-328490 | jenkins | v1.32.0 | 16 Jan 24 23:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:07:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 23:07:23.892780   31467 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:07:23.893019   31467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:07:23.893027   31467 out.go:309] Setting ErrFile to fd 2...
	I0116 23:07:23.893032   31467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:07:23.893217   31467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:07:23.893760   31467 out.go:303] Setting JSON to false
	I0116 23:07:23.894664   31467 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2990,"bootTime":1705443454,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:07:23.894725   31467 start.go:138] virtualization: kvm guest
	I0116 23:07:23.897046   31467 out.go:177] * [multinode-328490] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:07:23.898587   31467 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:07:23.898530   31467 notify.go:220] Checking for updates...
	I0116 23:07:23.899897   31467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:07:23.901331   31467 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:07:23.902586   31467 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:07:23.903717   31467 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:07:23.904885   31467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:07:23.906568   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:07:23.906655   31467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:07:23.907030   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:07:23.907089   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:07:23.921181   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45451
	I0116 23:07:23.921594   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:07:23.922188   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:07:23.922219   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:07:23.922662   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:07:23.922862   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:07:23.956685   31467 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:07:23.957917   31467 start.go:298] selected driver: kvm2
	I0116 23:07:23.957931   31467 start.go:902] validating driver "kvm2" against &{Name:multinode-328490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-328490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:07:23.958083   31467 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:07:23.958470   31467 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:07:23.958558   31467 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:07:23.972245   31467 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:07:23.972951   31467 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:07:23.972984   31467 cni.go:84] Creating CNI manager for ""
	I0116 23:07:23.972994   31467 cni.go:136] 3 nodes found, recommending kindnet
	I0116 23:07:23.973007   31467 start_flags.go:321] config:
	{Name:multinode-328490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-328490 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:07:23.973206   31467 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:07:23.975028   31467 out.go:177] * Starting control plane node multinode-328490 in cluster multinode-328490
	I0116 23:07:23.976147   31467 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:07:23.976183   31467 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:07:23.976197   31467 cache.go:56] Caching tarball of preloaded images
	I0116 23:07:23.976266   31467 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:07:23.976280   31467 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:07:23.976422   31467 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/config.json ...
	I0116 23:07:23.976648   31467 start.go:365] acquiring machines lock for multinode-328490: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:07:23.976688   31467 start.go:369] acquired machines lock for "multinode-328490" in 22.732µs
	I0116 23:07:23.976699   31467 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:07:23.976703   31467 fix.go:54] fixHost starting: 
	I0116 23:07:23.976997   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:07:23.977038   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:07:23.990363   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0116 23:07:23.990779   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:07:23.991301   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:07:23.991322   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:07:23.991645   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:07:23.991824   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:07:23.991997   31467 main.go:141] libmachine: (multinode-328490) Calling .GetState
	I0116 23:07:23.993490   31467 fix.go:102] recreateIfNeeded on multinode-328490: state=Running err=<nil>
	W0116 23:07:23.993510   31467 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:07:23.995531   31467 out.go:177] * Updating the running kvm2 "multinode-328490" VM ...
	I0116 23:07:23.997177   31467 machine.go:88] provisioning docker machine ...
	I0116 23:07:23.997197   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:07:23.997399   31467 main.go:141] libmachine: (multinode-328490) Calling .GetMachineName
	I0116 23:07:23.997528   31467 buildroot.go:166] provisioning hostname "multinode-328490"
	I0116 23:07:23.997547   31467 main.go:141] libmachine: (multinode-328490) Calling .GetMachineName
	I0116 23:07:23.997685   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:07:23.999895   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:07:24.000300   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:01:27 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:07:24.000334   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:07:24.000439   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:07:24.000602   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:07:24.000756   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:07:24.000929   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:07:24.001068   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:07:24.001465   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 23:07:24.001482   31467 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328490 && echo "multinode-328490" | sudo tee /etc/hostname
	I0116 23:07:42.478578   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:07:48.558645   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:07:51.630593   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:07:57.710663   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:00.782609   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:06.866660   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:09.934603   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:16.014576   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:19.086611   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:25.166631   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:28.238623   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:34.318633   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:37.390588   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:43.470639   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:46.542654   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:52.622592   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:08:55.694677   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:01.774617   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:04.846647   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:10.926593   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:13.998597   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:20.078637   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:23.150580   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:29.230643   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:32.302622   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:38.382622   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:41.454579   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:47.534589   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:50.606573   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:56.686641   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:09:59.758586   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:05.838579   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:08.910592   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:14.990633   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:18.062628   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:24.142644   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:27.214572   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:33.294692   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:36.366594   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:42.446677   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:45.518618   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:51.598590   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:10:54.670684   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:00.750587   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:03.822596   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:09.902592   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:12.974617   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:19.054638   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:22.126587   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:28.206656   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:31.278651   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:37.362606   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:40.430602   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:46.510638   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:49.582652   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:55.662604   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:11:58.734636   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:12:04.814613   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:12:07.886598   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:12:13.966587   31467 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.50:22: connect: no route to host
	I0116 23:12:16.969607   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:12:16.969646   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:16.971418   31467 machine.go:91] provisioned docker machine in 4m52.974221344s
	I0116 23:12:16.971457   31467 fix.go:56] fixHost completed within 4m52.99475436s
	I0116 23:12:16.971468   31467 start.go:83] releasing machines lock for "multinode-328490", held for 4m52.994769134s
	W0116 23:12:16.971492   31467 start.go:694] error starting host: provision: host is not running
	W0116 23:12:16.971570   31467 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:12:16.971582   31467 start.go:709] Will try again in 5 seconds ...
	I0116 23:12:21.971823   31467 start.go:365] acquiring machines lock for multinode-328490: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:12:21.971942   31467 start.go:369] acquired machines lock for "multinode-328490" in 58.785µs
	I0116 23:12:21.971963   31467 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:12:21.971970   31467 fix.go:54] fixHost starting: 
	I0116 23:12:21.972241   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:12:21.972263   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:12:21.986429   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I0116 23:12:21.986847   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:12:21.987345   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:12:21.987371   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:12:21.987677   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:12:21.987853   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:12:21.988015   31467 main.go:141] libmachine: (multinode-328490) Calling .GetState
	I0116 23:12:21.989518   31467 fix.go:102] recreateIfNeeded on multinode-328490: state=Stopped err=<nil>
	I0116 23:12:21.989544   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	W0116 23:12:21.989701   31467 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:12:21.992016   31467 out.go:177] * Restarting existing kvm2 VM for "multinode-328490" ...
	I0116 23:12:21.993571   31467 main.go:141] libmachine: (multinode-328490) Calling .Start
	I0116 23:12:21.993739   31467 main.go:141] libmachine: (multinode-328490) Ensuring networks are active...
	I0116 23:12:21.994613   31467 main.go:141] libmachine: (multinode-328490) Ensuring network default is active
	I0116 23:12:21.994958   31467 main.go:141] libmachine: (multinode-328490) Ensuring network mk-multinode-328490 is active
	I0116 23:12:21.995329   31467 main.go:141] libmachine: (multinode-328490) Getting domain xml...
	I0116 23:12:21.996006   31467 main.go:141] libmachine: (multinode-328490) Creating domain...
	I0116 23:12:23.167048   31467 main.go:141] libmachine: (multinode-328490) Waiting to get IP...
	I0116 23:12:23.168099   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:23.168679   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:23.168776   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:23.168690   32237 retry.go:31] will retry after 259.553931ms: waiting for machine to come up
	I0116 23:12:23.430164   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:23.430642   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:23.430664   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:23.430599   32237 retry.go:31] will retry after 298.50991ms: waiting for machine to come up
	I0116 23:12:23.731077   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:23.731528   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:23.731555   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:23.731463   32237 retry.go:31] will retry after 453.141969ms: waiting for machine to come up
	I0116 23:12:24.185919   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:24.186360   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:24.186389   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:24.186304   32237 retry.go:31] will retry after 398.092961ms: waiting for machine to come up
	I0116 23:12:24.585821   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:24.586294   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:24.586326   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:24.586245   32237 retry.go:31] will retry after 545.490873ms: waiting for machine to come up
	I0116 23:12:25.132944   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:25.133427   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:25.133455   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:25.133345   32237 retry.go:31] will retry after 647.240176ms: waiting for machine to come up
	I0116 23:12:25.782099   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:25.782506   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:25.782537   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:25.782469   32237 retry.go:31] will retry after 834.580051ms: waiting for machine to come up
	I0116 23:12:26.618433   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:26.619026   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:26.619052   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:26.618957   32237 retry.go:31] will retry after 1.255744739s: waiting for machine to come up
	I0116 23:12:27.876652   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:27.877093   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:27.877124   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:27.877067   32237 retry.go:31] will retry after 1.669402621s: waiting for machine to come up
	I0116 23:12:29.548847   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:29.549398   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:29.549422   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:29.549347   32237 retry.go:31] will retry after 1.779076556s: waiting for machine to come up
	I0116 23:12:31.331306   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:31.331763   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:31.331788   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:31.331712   32237 retry.go:31] will retry after 2.071278494s: waiting for machine to come up
	I0116 23:12:33.405451   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:33.405884   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:33.405914   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:33.405839   32237 retry.go:31] will retry after 3.529704857s: waiting for machine to come up
	I0116 23:12:36.939427   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:36.939902   31467 main.go:141] libmachine: (multinode-328490) DBG | unable to find current IP address of domain multinode-328490 in network mk-multinode-328490
	I0116 23:12:36.939927   31467 main.go:141] libmachine: (multinode-328490) DBG | I0116 23:12:36.939858   32237 retry.go:31] will retry after 3.271327133s: waiting for machine to come up
	I0116 23:12:40.212330   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.212785   31467 main.go:141] libmachine: (multinode-328490) Found IP for machine: 192.168.39.50
	I0116 23:12:40.212807   31467 main.go:141] libmachine: (multinode-328490) Reserving static IP address...
	I0116 23:12:40.212823   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has current primary IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.213247   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "multinode-328490", mac: "52:54:00:b2:25:4f", ip: "192.168.39.50"} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.213266   31467 main.go:141] libmachine: (multinode-328490) DBG | skip adding static IP to network mk-multinode-328490 - found existing host DHCP lease matching {name: "multinode-328490", mac: "52:54:00:b2:25:4f", ip: "192.168.39.50"}
	I0116 23:12:40.213276   31467 main.go:141] libmachine: (multinode-328490) Reserved static IP address: 192.168.39.50
	I0116 23:12:40.213286   31467 main.go:141] libmachine: (multinode-328490) Waiting for SSH to be available...
	I0116 23:12:40.213294   31467 main.go:141] libmachine: (multinode-328490) DBG | Getting to WaitForSSH function...
	I0116 23:12:40.215512   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.215905   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.215931   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.216078   31467 main.go:141] libmachine: (multinode-328490) DBG | Using SSH client type: external
	I0116 23:12:40.216101   31467 main.go:141] libmachine: (multinode-328490) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa (-rw-------)
	I0116 23:12:40.216136   31467 main.go:141] libmachine: (multinode-328490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:12:40.216152   31467 main.go:141] libmachine: (multinode-328490) DBG | About to run SSH command:
	I0116 23:12:40.216174   31467 main.go:141] libmachine: (multinode-328490) DBG | exit 0
	I0116 23:12:40.309888   31467 main.go:141] libmachine: (multinode-328490) DBG | SSH cmd err, output: <nil>: 
	I0116 23:12:40.310285   31467 main.go:141] libmachine: (multinode-328490) Calling .GetConfigRaw
	I0116 23:12:40.310872   31467 main.go:141] libmachine: (multinode-328490) Calling .GetIP
	I0116 23:12:40.313392   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.313738   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.313764   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.314035   31467 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/config.json ...
	I0116 23:12:40.314246   31467 machine.go:88] provisioning docker machine ...
	I0116 23:12:40.314271   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:12:40.314497   31467 main.go:141] libmachine: (multinode-328490) Calling .GetMachineName
	I0116 23:12:40.314661   31467 buildroot.go:166] provisioning hostname "multinode-328490"
	I0116 23:12:40.314686   31467 main.go:141] libmachine: (multinode-328490) Calling .GetMachineName
	I0116 23:12:40.314829   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:40.317368   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.317749   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.317778   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.317906   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:40.318080   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:40.318237   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:40.318372   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:40.318509   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:12:40.318880   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 23:12:40.318894   31467 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328490 && echo "multinode-328490" | sudo tee /etc/hostname
	I0116 23:12:40.457536   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328490
	
	I0116 23:12:40.457573   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:40.460238   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.460603   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.460637   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.460739   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:40.460966   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:40.461134   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:40.461266   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:40.461444   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:12:40.461780   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 23:12:40.461798   31467 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-328490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-328490/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-328490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:12:40.593902   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:12:40.593938   31467 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:12:40.593957   31467 buildroot.go:174] setting up certificates
	I0116 23:12:40.593968   31467 provision.go:83] configureAuth start
	I0116 23:12:40.593976   31467 main.go:141] libmachine: (multinode-328490) Calling .GetMachineName
	I0116 23:12:40.594238   31467 main.go:141] libmachine: (multinode-328490) Calling .GetIP
	I0116 23:12:40.596688   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.597099   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.597137   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.597235   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:40.599452   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.599859   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.599881   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.600017   31467 provision.go:138] copyHostCerts
	I0116 23:12:40.600054   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:12:40.600084   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:12:40.600094   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:12:40.600155   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:12:40.600232   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:12:40.600248   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:12:40.600254   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:12:40.600277   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:12:40.600317   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:12:40.600331   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:12:40.600338   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:12:40.600358   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:12:40.600406   31467 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.multinode-328490 san=[192.168.39.50 192.168.39.50 localhost 127.0.0.1 minikube multinode-328490]
	I0116 23:12:40.909576   31467 provision.go:172] copyRemoteCerts
	I0116 23:12:40.909636   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:12:40.909658   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:40.912260   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.912560   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:40.912592   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:40.912756   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:40.912939   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:40.913066   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:40.913187   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:12:41.003751   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 23:12:41.003824   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:12:41.024257   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 23:12:41.024337   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 23:12:41.044677   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 23:12:41.044753   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:12:41.065330   31467 provision.go:86] duration metric: configureAuth took 471.347842ms
	I0116 23:12:41.065364   31467 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:12:41.065648   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:12:41.065724   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:41.069075   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.069509   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:41.069543   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.069737   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:41.069936   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:41.070102   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:41.070215   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:41.070380   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:12:41.070718   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 23:12:41.070740   31467 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:12:41.384718   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:12:41.384742   31467 machine.go:91] provisioned docker machine in 1.070480803s
	I0116 23:12:41.384752   31467 start.go:300] post-start starting for "multinode-328490" (driver="kvm2")
	I0116 23:12:41.384766   31467 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:12:41.384796   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:12:41.385126   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:12:41.385153   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:41.388017   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.388474   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:41.388507   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.388687   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:41.388874   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:41.389047   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:41.389232   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:12:41.484322   31467 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:12:41.488426   31467 command_runner.go:130] > NAME=Buildroot
	I0116 23:12:41.488449   31467 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 23:12:41.488456   31467 command_runner.go:130] > ID=buildroot
	I0116 23:12:41.488465   31467 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 23:12:41.488473   31467 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 23:12:41.488667   31467 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:12:41.488684   31467 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:12:41.488775   31467 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:12:41.488871   31467 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:12:41.488883   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /etc/ssl/certs/149302.pem
	I0116 23:12:41.488999   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:12:41.497991   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:12:41.519512   31467 start.go:303] post-start completed in 134.743829ms
	I0116 23:12:41.519559   31467 fix.go:56] fixHost completed within 19.547571194s
	I0116 23:12:41.519584   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:41.522433   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.522897   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:41.522931   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.523108   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:41.523347   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:41.523525   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:41.523693   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:41.523862   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:12:41.524231   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0116 23:12:41.524245   31467 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:12:41.655142   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705446761.610612209
	
	I0116 23:12:41.655166   31467 fix.go:206] guest clock: 1705446761.610612209
	I0116 23:12:41.655176   31467 fix.go:219] Guest: 2024-01-16 23:12:41.610612209 +0000 UTC Remote: 2024-01-16 23:12:41.519563811 +0000 UTC m=+317.674883948 (delta=91.048398ms)
	I0116 23:12:41.655200   31467 fix.go:190] guest clock delta is within tolerance: 91.048398ms
	I0116 23:12:41.655206   31467 start.go:83] releasing machines lock for "multinode-328490", held for 19.683256579s
	I0116 23:12:41.655230   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:12:41.655549   31467 main.go:141] libmachine: (multinode-328490) Calling .GetIP
	I0116 23:12:41.658007   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.658470   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:41.658495   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.658751   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:12:41.659189   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:12:41.659337   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:12:41.659481   31467 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:12:41.659548   31467 ssh_runner.go:195] Run: cat /version.json
	I0116 23:12:41.659594   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:41.659549   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:12:41.662205   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.662533   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.662695   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:41.662720   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.662872   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:41.662982   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:41.663008   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:41.663056   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:41.663139   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:12:41.663206   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:41.663267   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:12:41.663323   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:12:41.663369   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:12:41.663476   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:12:41.751392   31467 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0116 23:12:41.751877   31467 ssh_runner.go:195] Run: systemctl --version
	I0116 23:12:41.784782   31467 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 23:12:41.784827   31467 command_runner.go:130] > systemd 247 (247)
	I0116 23:12:41.784855   31467 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0116 23:12:41.784932   31467 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:12:41.925408   31467 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 23:12:41.931039   31467 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 23:12:41.931189   31467 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:12:41.931253   31467 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:12:41.950995   31467 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 23:12:41.951069   31467 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:12:41.951083   31467 start.go:475] detecting cgroup driver to use...
	I0116 23:12:41.951158   31467 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:12:41.967476   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:12:41.982378   31467 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:12:41.982449   31467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:12:41.997336   31467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:12:42.011406   31467 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:12:42.026309   31467 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0116 23:12:42.121903   31467 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:12:42.241665   31467 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 23:12:42.241769   31467 docker.go:233] disabling docker service ...
	I0116 23:12:42.241824   31467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:12:42.254841   31467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:12:42.266415   31467 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0116 23:12:42.266517   31467 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:12:42.279229   31467 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 23:12:42.379208   31467 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:12:42.391298   31467 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0116 23:12:42.391642   31467 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 23:12:42.491122   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:12:42.502991   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:12:42.519097   31467 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 23:12:42.519130   31467 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:12:42.519194   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:12:42.528104   31467 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:12:42.528173   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:12:42.537131   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:12:42.546644   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:12:42.555707   31467 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:12:42.564805   31467 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:12:42.572379   31467 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:12:42.572422   31467 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:12:42.572466   31467 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:12:42.584118   31467 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:12:42.591946   31467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:12:42.699342   31467 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:12:42.859145   31467 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:12:42.859218   31467 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:12:42.864195   31467 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 23:12:42.864228   31467 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 23:12:42.864238   31467 command_runner.go:130] > Device: 16h/22d	Inode: 750         Links: 1
	I0116 23:12:42.864248   31467 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 23:12:42.864254   31467 command_runner.go:130] > Access: 2024-01-16 23:12:42.800314209 +0000
	I0116 23:12:42.864260   31467 command_runner.go:130] > Modify: 2024-01-16 23:12:42.800314209 +0000
	I0116 23:12:42.864265   31467 command_runner.go:130] > Change: 2024-01-16 23:12:42.800314209 +0000
	I0116 23:12:42.864269   31467 command_runner.go:130] >  Birth: -
	I0116 23:12:42.864295   31467 start.go:543] Will wait 60s for crictl version
	I0116 23:12:42.864359   31467 ssh_runner.go:195] Run: which crictl
	I0116 23:12:42.867812   31467 command_runner.go:130] > /usr/bin/crictl
	I0116 23:12:42.867980   31467 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:12:42.903036   31467 command_runner.go:130] > Version:  0.1.0
	I0116 23:12:42.903060   31467 command_runner.go:130] > RuntimeName:  cri-o
	I0116 23:12:42.903065   31467 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 23:12:42.903070   31467 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 23:12:42.903088   31467 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:12:42.903156   31467 ssh_runner.go:195] Run: crio --version
	I0116 23:12:42.943397   31467 command_runner.go:130] > crio version 1.24.1
	I0116 23:12:42.943419   31467 command_runner.go:130] > Version:          1.24.1
	I0116 23:12:42.943426   31467 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 23:12:42.943430   31467 command_runner.go:130] > GitTreeState:     dirty
	I0116 23:12:42.943436   31467 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 23:12:42.943440   31467 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 23:12:42.943444   31467 command_runner.go:130] > Compiler:         gc
	I0116 23:12:42.943449   31467 command_runner.go:130] > Platform:         linux/amd64
	I0116 23:12:42.943454   31467 command_runner.go:130] > Linkmode:         dynamic
	I0116 23:12:42.943461   31467 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 23:12:42.943465   31467 command_runner.go:130] > SeccompEnabled:   true
	I0116 23:12:42.943469   31467 command_runner.go:130] > AppArmorEnabled:  false
	I0116 23:12:42.943578   31467 ssh_runner.go:195] Run: crio --version
	I0116 23:12:42.985872   31467 command_runner.go:130] > crio version 1.24.1
	I0116 23:12:42.985893   31467 command_runner.go:130] > Version:          1.24.1
	I0116 23:12:42.985900   31467 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 23:12:42.985904   31467 command_runner.go:130] > GitTreeState:     dirty
	I0116 23:12:42.985910   31467 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 23:12:42.985915   31467 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 23:12:42.985919   31467 command_runner.go:130] > Compiler:         gc
	I0116 23:12:42.985923   31467 command_runner.go:130] > Platform:         linux/amd64
	I0116 23:12:42.985928   31467 command_runner.go:130] > Linkmode:         dynamic
	I0116 23:12:42.985934   31467 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 23:12:42.985939   31467 command_runner.go:130] > SeccompEnabled:   true
	I0116 23:12:42.985943   31467 command_runner.go:130] > AppArmorEnabled:  false
	I0116 23:12:42.989263   31467 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:12:42.990780   31467 main.go:141] libmachine: (multinode-328490) Calling .GetIP
	I0116 23:12:42.993738   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:42.994058   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:12:42.994089   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:12:42.994317   31467 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:12:42.998138   31467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:12:43.009757   31467 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:12:43.009817   31467 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:12:43.042849   31467 command_runner.go:130] > {
	I0116 23:12:43.042869   31467 command_runner.go:130] >   "images": [
	I0116 23:12:43.042877   31467 command_runner.go:130] >     {
	I0116 23:12:43.042885   31467 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 23:12:43.042890   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:43.042895   31467 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 23:12:43.042899   31467 command_runner.go:130] >       ],
	I0116 23:12:43.042903   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:43.042911   31467 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 23:12:43.042918   31467 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 23:12:43.042922   31467 command_runner.go:130] >       ],
	I0116 23:12:43.042926   31467 command_runner.go:130] >       "size": "750414",
	I0116 23:12:43.042932   31467 command_runner.go:130] >       "uid": {
	I0116 23:12:43.042937   31467 command_runner.go:130] >         "value": "65535"
	I0116 23:12:43.042944   31467 command_runner.go:130] >       },
	I0116 23:12:43.042947   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:43.042963   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:43.042970   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:43.042974   31467 command_runner.go:130] >     }
	I0116 23:12:43.042980   31467 command_runner.go:130] >   ]
	I0116 23:12:43.042985   31467 command_runner.go:130] > }
	I0116 23:12:43.044256   31467 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:12:43.044331   31467 ssh_runner.go:195] Run: which lz4
	I0116 23:12:43.048140   31467 command_runner.go:130] > /usr/bin/lz4
	I0116 23:12:43.048170   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 23:12:43.048242   31467 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:12:43.052476   31467 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:12:43.052783   31467 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:12:43.052807   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:12:44.635460   31467 crio.go:444] Took 1.587239 seconds to copy over tarball
	I0116 23:12:44.635543   31467 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:12:47.299295   31467 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.66372298s)
	I0116 23:12:47.299329   31467 crio.go:451] Took 2.663842 seconds to extract the tarball
	I0116 23:12:47.299342   31467 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:12:47.338931   31467 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:12:47.384873   31467 command_runner.go:130] > {
	I0116 23:12:47.384908   31467 command_runner.go:130] >   "images": [
	I0116 23:12:47.384915   31467 command_runner.go:130] >     {
	I0116 23:12:47.384927   31467 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0116 23:12:47.384935   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.384949   31467 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 23:12:47.384959   31467 command_runner.go:130] >       ],
	I0116 23:12:47.384966   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.384982   31467 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 23:12:47.384995   31467 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0116 23:12:47.385004   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385030   31467 command_runner.go:130] >       "size": "65258016",
	I0116 23:12:47.385040   31467 command_runner.go:130] >       "uid": null,
	I0116 23:12:47.385047   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385060   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385070   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385076   31467 command_runner.go:130] >     },
	I0116 23:12:47.385085   31467 command_runner.go:130] >     {
	I0116 23:12:47.385095   31467 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0116 23:12:47.385105   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385117   31467 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 23:12:47.385124   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385134   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385151   31467 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0116 23:12:47.385167   31467 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0116 23:12:47.385175   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385182   31467 command_runner.go:130] >       "size": "31470524",
	I0116 23:12:47.385186   31467 command_runner.go:130] >       "uid": null,
	I0116 23:12:47.385191   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385197   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385201   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385208   31467 command_runner.go:130] >     },
	I0116 23:12:47.385211   31467 command_runner.go:130] >     {
	I0116 23:12:47.385217   31467 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0116 23:12:47.385223   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385229   31467 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 23:12:47.385234   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385238   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385248   31467 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0116 23:12:47.385257   31467 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0116 23:12:47.385264   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385271   31467 command_runner.go:130] >       "size": "53621675",
	I0116 23:12:47.385278   31467 command_runner.go:130] >       "uid": null,
	I0116 23:12:47.385282   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385285   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385291   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385295   31467 command_runner.go:130] >     },
	I0116 23:12:47.385301   31467 command_runner.go:130] >     {
	I0116 23:12:47.385307   31467 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0116 23:12:47.385314   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385319   31467 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 23:12:47.385324   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385328   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385338   31467 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0116 23:12:47.385347   31467 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0116 23:12:47.385359   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385366   31467 command_runner.go:130] >       "size": "295456551",
	I0116 23:12:47.385369   31467 command_runner.go:130] >       "uid": {
	I0116 23:12:47.385376   31467 command_runner.go:130] >         "value": "0"
	I0116 23:12:47.385381   31467 command_runner.go:130] >       },
	I0116 23:12:47.385388   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385393   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385399   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385403   31467 command_runner.go:130] >     },
	I0116 23:12:47.385408   31467 command_runner.go:130] >     {
	I0116 23:12:47.385416   31467 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0116 23:12:47.385423   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385428   31467 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 23:12:47.385432   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385437   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385446   31467 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0116 23:12:47.385455   31467 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0116 23:12:47.385461   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385465   31467 command_runner.go:130] >       "size": "127226832",
	I0116 23:12:47.385469   31467 command_runner.go:130] >       "uid": {
	I0116 23:12:47.385474   31467 command_runner.go:130] >         "value": "0"
	I0116 23:12:47.385480   31467 command_runner.go:130] >       },
	I0116 23:12:47.385487   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385495   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385500   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385506   31467 command_runner.go:130] >     },
	I0116 23:12:47.385510   31467 command_runner.go:130] >     {
	I0116 23:12:47.385518   31467 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0116 23:12:47.385524   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385530   31467 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 23:12:47.385534   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385537   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385545   31467 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 23:12:47.385552   31467 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0116 23:12:47.385558   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385562   31467 command_runner.go:130] >       "size": "123261750",
	I0116 23:12:47.385566   31467 command_runner.go:130] >       "uid": {
	I0116 23:12:47.385571   31467 command_runner.go:130] >         "value": "0"
	I0116 23:12:47.385576   31467 command_runner.go:130] >       },
	I0116 23:12:47.385581   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385591   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385595   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385599   31467 command_runner.go:130] >     },
	I0116 23:12:47.385602   31467 command_runner.go:130] >     {
	I0116 23:12:47.385608   31467 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0116 23:12:47.385621   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385626   31467 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 23:12:47.385632   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385636   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385646   31467 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0116 23:12:47.385655   31467 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 23:12:47.385659   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385665   31467 command_runner.go:130] >       "size": "74749335",
	I0116 23:12:47.385669   31467 command_runner.go:130] >       "uid": null,
	I0116 23:12:47.385674   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385678   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385685   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385688   31467 command_runner.go:130] >     },
	I0116 23:12:47.385697   31467 command_runner.go:130] >     {
	I0116 23:12:47.385703   31467 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0116 23:12:47.385710   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385715   31467 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 23:12:47.385721   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385725   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385745   31467 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 23:12:47.385755   31467 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0116 23:12:47.385759   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385763   31467 command_runner.go:130] >       "size": "61551410",
	I0116 23:12:47.385769   31467 command_runner.go:130] >       "uid": {
	I0116 23:12:47.385773   31467 command_runner.go:130] >         "value": "0"
	I0116 23:12:47.385778   31467 command_runner.go:130] >       },
	I0116 23:12:47.385782   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385788   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385792   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385798   31467 command_runner.go:130] >     },
	I0116 23:12:47.385802   31467 command_runner.go:130] >     {
	I0116 23:12:47.385812   31467 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 23:12:47.385818   31467 command_runner.go:130] >       "repoTags": [
	I0116 23:12:47.385823   31467 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 23:12:47.385830   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385836   31467 command_runner.go:130] >       "repoDigests": [
	I0116 23:12:47.385855   31467 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 23:12:47.385869   31467 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 23:12:47.385875   31467 command_runner.go:130] >       ],
	I0116 23:12:47.385883   31467 command_runner.go:130] >       "size": "750414",
	I0116 23:12:47.385891   31467 command_runner.go:130] >       "uid": {
	I0116 23:12:47.385899   31467 command_runner.go:130] >         "value": "65535"
	I0116 23:12:47.385907   31467 command_runner.go:130] >       },
	I0116 23:12:47.385914   31467 command_runner.go:130] >       "username": "",
	I0116 23:12:47.385923   31467 command_runner.go:130] >       "spec": null,
	I0116 23:12:47.385929   31467 command_runner.go:130] >       "pinned": false
	I0116 23:12:47.385938   31467 command_runner.go:130] >     }
	I0116 23:12:47.385943   31467 command_runner.go:130] >   ]
	I0116 23:12:47.385951   31467 command_runner.go:130] > }
	I0116 23:12:47.386081   31467 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:12:47.386094   31467 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:12:47.386150   31467 ssh_runner.go:195] Run: crio config
	I0116 23:12:47.430018   31467 command_runner.go:130] ! time="2024-01-16 23:12:47.384942525Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 23:12:47.430049   31467 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 23:12:47.435299   31467 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 23:12:47.435328   31467 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 23:12:47.435339   31467 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 23:12:47.435343   31467 command_runner.go:130] > #
	I0116 23:12:47.435360   31467 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 23:12:47.435370   31467 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 23:12:47.435380   31467 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 23:12:47.435396   31467 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 23:12:47.435406   31467 command_runner.go:130] > # reload'.
	I0116 23:12:47.435418   31467 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 23:12:47.435433   31467 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 23:12:47.435443   31467 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 23:12:47.435454   31467 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 23:12:47.435457   31467 command_runner.go:130] > [crio]
	I0116 23:12:47.435467   31467 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 23:12:47.435474   31467 command_runner.go:130] > # containers images, in this directory.
	I0116 23:12:47.435480   31467 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 23:12:47.435493   31467 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 23:12:47.435500   31467 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 23:12:47.435506   31467 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 23:12:47.435538   31467 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 23:12:47.435543   31467 command_runner.go:130] > storage_driver = "overlay"
	I0116 23:12:47.435548   31467 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 23:12:47.435554   31467 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 23:12:47.435558   31467 command_runner.go:130] > storage_option = [
	I0116 23:12:47.435562   31467 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 23:12:47.435566   31467 command_runner.go:130] > ]
	I0116 23:12:47.435572   31467 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 23:12:47.435578   31467 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 23:12:47.435582   31467 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 23:12:47.435591   31467 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 23:12:47.435597   31467 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 23:12:47.435602   31467 command_runner.go:130] > # always happen on a node reboot
	I0116 23:12:47.435607   31467 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 23:12:47.435616   31467 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 23:12:47.435622   31467 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 23:12:47.435652   31467 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 23:12:47.435662   31467 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 23:12:47.435669   31467 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 23:12:47.435680   31467 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 23:12:47.435685   31467 command_runner.go:130] > # internal_wipe = true
	I0116 23:12:47.435691   31467 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 23:12:47.435699   31467 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 23:12:47.435705   31467 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 23:12:47.435718   31467 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 23:12:47.435726   31467 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 23:12:47.435730   31467 command_runner.go:130] > [crio.api]
	I0116 23:12:47.435735   31467 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 23:12:47.435744   31467 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 23:12:47.435750   31467 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 23:12:47.435757   31467 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 23:12:47.435763   31467 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 23:12:47.435773   31467 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 23:12:47.435777   31467 command_runner.go:130] > # stream_port = "0"
	I0116 23:12:47.435784   31467 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 23:12:47.435788   31467 command_runner.go:130] > # stream_enable_tls = false
	I0116 23:12:47.435797   31467 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 23:12:47.435801   31467 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 23:12:47.435807   31467 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 23:12:47.435815   31467 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 23:12:47.435819   31467 command_runner.go:130] > # minutes.
	I0116 23:12:47.435826   31467 command_runner.go:130] > # stream_tls_cert = ""
	I0116 23:12:47.435832   31467 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 23:12:47.435840   31467 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 23:12:47.435844   31467 command_runner.go:130] > # stream_tls_key = ""
	I0116 23:12:47.435851   31467 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 23:12:47.435859   31467 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 23:12:47.435867   31467 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 23:12:47.435871   31467 command_runner.go:130] > # stream_tls_ca = ""
	I0116 23:12:47.435878   31467 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 23:12:47.435885   31467 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 23:12:47.435891   31467 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 23:12:47.435898   31467 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 23:12:47.435919   31467 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 23:12:47.435927   31467 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 23:12:47.435932   31467 command_runner.go:130] > [crio.runtime]
	I0116 23:12:47.435940   31467 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 23:12:47.435946   31467 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 23:12:47.435953   31467 command_runner.go:130] > # "nofile=1024:2048"
	I0116 23:12:47.435959   31467 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 23:12:47.435965   31467 command_runner.go:130] > # default_ulimits = [
	I0116 23:12:47.435969   31467 command_runner.go:130] > # ]
	I0116 23:12:47.435977   31467 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 23:12:47.435982   31467 command_runner.go:130] > # no_pivot = false
	I0116 23:12:47.435991   31467 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 23:12:47.436000   31467 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 23:12:47.436006   31467 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 23:12:47.436014   31467 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 23:12:47.436022   31467 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 23:12:47.436028   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 23:12:47.436035   31467 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 23:12:47.436039   31467 command_runner.go:130] > # Cgroup setting for conmon
	I0116 23:12:47.436048   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 23:12:47.436054   31467 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 23:12:47.436060   31467 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 23:12:47.436068   31467 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 23:12:47.436074   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 23:12:47.436080   31467 command_runner.go:130] > conmon_env = [
	I0116 23:12:47.436086   31467 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 23:12:47.436092   31467 command_runner.go:130] > ]
	I0116 23:12:47.436097   31467 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 23:12:47.436107   31467 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 23:12:47.436122   31467 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 23:12:47.436128   31467 command_runner.go:130] > # default_env = [
	I0116 23:12:47.436133   31467 command_runner.go:130] > # ]
	I0116 23:12:47.436140   31467 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 23:12:47.436144   31467 command_runner.go:130] > # selinux = false
	I0116 23:12:47.436153   31467 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 23:12:47.436161   31467 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 23:12:47.436169   31467 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 23:12:47.436174   31467 command_runner.go:130] > # seccomp_profile = ""
	I0116 23:12:47.436179   31467 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 23:12:47.436187   31467 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 23:12:47.436194   31467 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 23:12:47.436201   31467 command_runner.go:130] > # which might increase security.
	I0116 23:12:47.436205   31467 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 23:12:47.436212   31467 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 23:12:47.436220   31467 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 23:12:47.436227   31467 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 23:12:47.436235   31467 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 23:12:47.436243   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:12:47.436250   31467 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 23:12:47.436255   31467 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 23:12:47.436262   31467 command_runner.go:130] > # the cgroup blockio controller.
	I0116 23:12:47.436267   31467 command_runner.go:130] > # blockio_config_file = ""
	I0116 23:12:47.436275   31467 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 23:12:47.436282   31467 command_runner.go:130] > # irqbalance daemon.
	I0116 23:12:47.436287   31467 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 23:12:47.436296   31467 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 23:12:47.436303   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:12:47.436310   31467 command_runner.go:130] > # rdt_config_file = ""
	I0116 23:12:47.436316   31467 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 23:12:47.436322   31467 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 23:12:47.436330   31467 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 23:12:47.436337   31467 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 23:12:47.436343   31467 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 23:12:47.436351   31467 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 23:12:47.436357   31467 command_runner.go:130] > # will be added.
	I0116 23:12:47.436363   31467 command_runner.go:130] > # default_capabilities = [
	I0116 23:12:47.436369   31467 command_runner.go:130] > # 	"CHOWN",
	I0116 23:12:47.436373   31467 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 23:12:47.436379   31467 command_runner.go:130] > # 	"FSETID",
	I0116 23:12:47.436383   31467 command_runner.go:130] > # 	"FOWNER",
	I0116 23:12:47.436389   31467 command_runner.go:130] > # 	"SETGID",
	I0116 23:12:47.436393   31467 command_runner.go:130] > # 	"SETUID",
	I0116 23:12:47.436399   31467 command_runner.go:130] > # 	"SETPCAP",
	I0116 23:12:47.436403   31467 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 23:12:47.436409   31467 command_runner.go:130] > # 	"KILL",
	I0116 23:12:47.436412   31467 command_runner.go:130] > # ]
	I0116 23:12:47.436421   31467 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 23:12:47.436428   31467 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 23:12:47.436435   31467 command_runner.go:130] > # default_sysctls = [
	I0116 23:12:47.436439   31467 command_runner.go:130] > # ]
	I0116 23:12:47.436446   31467 command_runner.go:130] > # List of devices on the host that a
	I0116 23:12:47.436451   31467 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 23:12:47.436458   31467 command_runner.go:130] > # allowed_devices = [
	I0116 23:12:47.436464   31467 command_runner.go:130] > # 	"/dev/fuse",
	I0116 23:12:47.436470   31467 command_runner.go:130] > # ]
	I0116 23:12:47.436475   31467 command_runner.go:130] > # List of additional devices, specified as
	I0116 23:12:47.436484   31467 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 23:12:47.436492   31467 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 23:12:47.436525   31467 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 23:12:47.436533   31467 command_runner.go:130] > # additional_devices = [
	I0116 23:12:47.436537   31467 command_runner.go:130] > # ]
	I0116 23:12:47.436543   31467 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 23:12:47.436547   31467 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 23:12:47.436551   31467 command_runner.go:130] > # 	"/etc/cdi",
	I0116 23:12:47.436557   31467 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 23:12:47.436561   31467 command_runner.go:130] > # ]
	I0116 23:12:47.436569   31467 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 23:12:47.436577   31467 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 23:12:47.436581   31467 command_runner.go:130] > # Defaults to false.
	I0116 23:12:47.436588   31467 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 23:12:47.436594   31467 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 23:12:47.436604   31467 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 23:12:47.436610   31467 command_runner.go:130] > # hooks_dir = [
	I0116 23:12:47.436615   31467 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 23:12:47.436621   31467 command_runner.go:130] > # ]
	I0116 23:12:47.436627   31467 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 23:12:47.436636   31467 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 23:12:47.436641   31467 command_runner.go:130] > # its default mounts from the following two files:
	I0116 23:12:47.436644   31467 command_runner.go:130] > #
	I0116 23:12:47.436649   31467 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 23:12:47.436656   31467 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 23:12:47.436663   31467 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 23:12:47.436669   31467 command_runner.go:130] > #
	I0116 23:12:47.436675   31467 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 23:12:47.436683   31467 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 23:12:47.436692   31467 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 23:12:47.436699   31467 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 23:12:47.436702   31467 command_runner.go:130] > #
	I0116 23:12:47.436709   31467 command_runner.go:130] > # default_mounts_file = ""
	I0116 23:12:47.436718   31467 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 23:12:47.436727   31467 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 23:12:47.436733   31467 command_runner.go:130] > pids_limit = 1024
	I0116 23:12:47.436739   31467 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 23:12:47.436747   31467 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 23:12:47.436755   31467 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 23:12:47.436765   31467 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 23:12:47.436771   31467 command_runner.go:130] > # log_size_max = -1
	I0116 23:12:47.436778   31467 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 23:12:47.436785   31467 command_runner.go:130] > # log_to_journald = false
	I0116 23:12:47.436791   31467 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 23:12:47.436798   31467 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 23:12:47.436803   31467 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 23:12:47.436810   31467 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 23:12:47.436815   31467 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 23:12:47.436822   31467 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 23:12:47.436827   31467 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 23:12:47.436835   31467 command_runner.go:130] > # read_only = false
	I0116 23:12:47.436843   31467 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 23:12:47.436852   31467 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 23:12:47.436856   31467 command_runner.go:130] > # live configuration reload.
	I0116 23:12:47.436861   31467 command_runner.go:130] > # log_level = "info"
	I0116 23:12:47.436866   31467 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 23:12:47.436873   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:12:47.436878   31467 command_runner.go:130] > # log_filter = ""
	I0116 23:12:47.436886   31467 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 23:12:47.436893   31467 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 23:12:47.436899   31467 command_runner.go:130] > # separated by comma.
	I0116 23:12:47.436903   31467 command_runner.go:130] > # uid_mappings = ""
	I0116 23:12:47.436911   31467 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 23:12:47.436917   31467 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 23:12:47.436923   31467 command_runner.go:130] > # separated by comma.
	I0116 23:12:47.436927   31467 command_runner.go:130] > # gid_mappings = ""
	I0116 23:12:47.436933   31467 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 23:12:47.436941   31467 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 23:12:47.436947   31467 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 23:12:47.436955   31467 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 23:12:47.436961   31467 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 23:12:47.436969   31467 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 23:12:47.436978   31467 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 23:12:47.436985   31467 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 23:12:47.436991   31467 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 23:12:47.436999   31467 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 23:12:47.437006   31467 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 23:12:47.437013   31467 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 23:12:47.437019   31467 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 23:12:47.437026   31467 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 23:12:47.437032   31467 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 23:12:47.437039   31467 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 23:12:47.437044   31467 command_runner.go:130] > drop_infra_ctr = false
	I0116 23:12:47.437052   31467 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 23:12:47.437060   31467 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 23:12:47.437069   31467 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 23:12:47.437076   31467 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 23:12:47.437085   31467 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 23:12:47.437092   31467 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 23:12:47.437096   31467 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 23:12:47.437108   31467 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 23:12:47.437115   31467 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 23:12:47.437121   31467 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 23:12:47.437130   31467 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 23:12:47.437138   31467 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 23:12:47.437142   31467 command_runner.go:130] > # default_runtime = "runc"
	I0116 23:12:47.437150   31467 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 23:12:47.437157   31467 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 23:12:47.437168   31467 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 23:12:47.437175   31467 command_runner.go:130] > # creation as a file is not desired either.
	I0116 23:12:47.437183   31467 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 23:12:47.437190   31467 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 23:12:47.437195   31467 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 23:12:47.437198   31467 command_runner.go:130] > # ]
	I0116 23:12:47.437205   31467 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 23:12:47.437216   31467 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 23:12:47.437225   31467 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 23:12:47.437233   31467 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 23:12:47.437238   31467 command_runner.go:130] > #
	I0116 23:12:47.437243   31467 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 23:12:47.437250   31467 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 23:12:47.437255   31467 command_runner.go:130] > #  runtime_type = "oci"
	I0116 23:12:47.437262   31467 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 23:12:47.437267   31467 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 23:12:47.437274   31467 command_runner.go:130] > #  allowed_annotations = []
	I0116 23:12:47.437278   31467 command_runner.go:130] > # Where:
	I0116 23:12:47.437283   31467 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 23:12:47.437292   31467 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 23:12:47.437301   31467 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 23:12:47.437309   31467 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 23:12:47.437315   31467 command_runner.go:130] > #   in $PATH.
	I0116 23:12:47.437321   31467 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 23:12:47.437329   31467 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 23:12:47.437340   31467 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 23:12:47.437346   31467 command_runner.go:130] > #   state.
	I0116 23:12:47.437352   31467 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 23:12:47.437360   31467 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 23:12:47.437368   31467 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 23:12:47.437374   31467 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 23:12:47.437383   31467 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 23:12:47.437391   31467 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 23:12:47.437398   31467 command_runner.go:130] > #   The currently recognized values are:
	I0116 23:12:47.437405   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 23:12:47.437416   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 23:12:47.437424   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 23:12:47.437431   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 23:12:47.437440   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 23:12:47.437447   31467 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 23:12:47.437456   31467 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 23:12:47.437465   31467 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 23:12:47.437472   31467 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 23:12:47.437478   31467 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 23:12:47.437484   31467 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 23:12:47.437488   31467 command_runner.go:130] > runtime_type = "oci"
	I0116 23:12:47.437495   31467 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 23:12:47.437499   31467 command_runner.go:130] > runtime_config_path = ""
	I0116 23:12:47.437506   31467 command_runner.go:130] > monitor_path = ""
	I0116 23:12:47.437513   31467 command_runner.go:130] > monitor_cgroup = ""
	I0116 23:12:47.437520   31467 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 23:12:47.437526   31467 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 23:12:47.437532   31467 command_runner.go:130] > # running containers
	I0116 23:12:47.437537   31467 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 23:12:47.437545   31467 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 23:12:47.437591   31467 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 23:12:47.437600   31467 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 23:12:47.437605   31467 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 23:12:47.437611   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 23:12:47.437616   31467 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 23:12:47.437623   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 23:12:47.437630   31467 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 23:12:47.437637   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 23:12:47.437643   31467 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 23:12:47.437651   31467 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 23:12:47.437657   31467 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 23:12:47.437667   31467 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0116 23:12:47.437676   31467 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 23:12:47.437684   31467 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 23:12:47.437693   31467 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 23:12:47.437703   31467 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 23:12:47.437710   31467 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 23:12:47.437718   31467 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 23:12:47.437724   31467 command_runner.go:130] > # Example:
	I0116 23:12:47.437729   31467 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 23:12:47.437737   31467 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 23:12:47.437744   31467 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 23:12:47.437750   31467 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 23:12:47.437756   31467 command_runner.go:130] > # cpuset = 0
	I0116 23:12:47.437761   31467 command_runner.go:130] > # cpushares = "0-1"
	I0116 23:12:47.437767   31467 command_runner.go:130] > # Where:
	I0116 23:12:47.437772   31467 command_runner.go:130] > # The workload name is workload-type.
	I0116 23:12:47.437779   31467 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 23:12:47.437787   31467 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 23:12:47.437795   31467 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 23:12:47.437802   31467 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 23:12:47.437810   31467 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 23:12:47.437814   31467 command_runner.go:130] > # 
	I0116 23:12:47.437821   31467 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 23:12:47.437826   31467 command_runner.go:130] > #
	I0116 23:12:47.437832   31467 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 23:12:47.437840   31467 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 23:12:47.437846   31467 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 23:12:47.437855   31467 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 23:12:47.437860   31467 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 23:12:47.437864   31467 command_runner.go:130] > [crio.image]
	I0116 23:12:47.437870   31467 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 23:12:47.437880   31467 command_runner.go:130] > # default_transport = "docker://"
	I0116 23:12:47.437886   31467 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 23:12:47.437894   31467 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 23:12:47.437898   31467 command_runner.go:130] > # global_auth_file = ""
	I0116 23:12:47.437904   31467 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 23:12:47.437909   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:12:47.437916   31467 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 23:12:47.437922   31467 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 23:12:47.437930   31467 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 23:12:47.437935   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:12:47.437941   31467 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 23:12:47.437947   31467 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 23:12:47.437953   31467 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 23:12:47.437961   31467 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 23:12:47.437968   31467 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 23:12:47.437974   31467 command_runner.go:130] > # pause_command = "/pause"
	I0116 23:12:47.437980   31467 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 23:12:47.437989   31467 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 23:12:47.437999   31467 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 23:12:47.438006   31467 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 23:12:47.438010   31467 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 23:12:47.438014   31467 command_runner.go:130] > # signature_policy = ""
	I0116 23:12:47.438020   31467 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 23:12:47.438026   31467 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 23:12:47.438029   31467 command_runner.go:130] > # changing them here.
	I0116 23:12:47.438033   31467 command_runner.go:130] > # insecure_registries = [
	I0116 23:12:47.438036   31467 command_runner.go:130] > # ]
	I0116 23:12:47.438044   31467 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 23:12:47.438049   31467 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 23:12:47.438053   31467 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 23:12:47.438057   31467 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 23:12:47.438062   31467 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 23:12:47.438067   31467 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 23:12:47.438071   31467 command_runner.go:130] > # CNI plugins.
	I0116 23:12:47.438075   31467 command_runner.go:130] > [crio.network]
	I0116 23:12:47.438080   31467 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 23:12:47.438087   31467 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 23:12:47.438091   31467 command_runner.go:130] > # cni_default_network = ""
	I0116 23:12:47.438097   31467 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 23:12:47.438101   31467 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 23:12:47.438106   31467 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 23:12:47.438110   31467 command_runner.go:130] > # plugin_dirs = [
	I0116 23:12:47.438114   31467 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 23:12:47.438117   31467 command_runner.go:130] > # ]
	I0116 23:12:47.438123   31467 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 23:12:47.438127   31467 command_runner.go:130] > [crio.metrics]
	I0116 23:12:47.438133   31467 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 23:12:47.438137   31467 command_runner.go:130] > enable_metrics = true
	I0116 23:12:47.438142   31467 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 23:12:47.438146   31467 command_runner.go:130] > # By default all metrics are enabled.
	I0116 23:12:47.438152   31467 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 23:12:47.438158   31467 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 23:12:47.438163   31467 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 23:12:47.438167   31467 command_runner.go:130] > # metrics_collectors = [
	I0116 23:12:47.438173   31467 command_runner.go:130] > # 	"operations",
	I0116 23:12:47.438178   31467 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 23:12:47.438185   31467 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 23:12:47.438189   31467 command_runner.go:130] > # 	"operations_errors",
	I0116 23:12:47.438194   31467 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 23:12:47.438198   31467 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 23:12:47.438204   31467 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 23:12:47.438209   31467 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 23:12:47.438215   31467 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 23:12:47.438220   31467 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 23:12:47.438226   31467 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 23:12:47.438231   31467 command_runner.go:130] > # 	"containers_oom_total",
	I0116 23:12:47.438238   31467 command_runner.go:130] > # 	"containers_oom",
	I0116 23:12:47.438245   31467 command_runner.go:130] > # 	"processes_defunct",
	I0116 23:12:47.438250   31467 command_runner.go:130] > # 	"operations_total",
	I0116 23:12:47.438256   31467 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 23:12:47.438261   31467 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 23:12:47.438268   31467 command_runner.go:130] > # 	"operations_errors_total",
	I0116 23:12:47.438275   31467 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 23:12:47.438282   31467 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 23:12:47.438287   31467 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 23:12:47.438293   31467 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 23:12:47.438297   31467 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 23:12:47.438304   31467 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 23:12:47.438308   31467 command_runner.go:130] > # ]
	I0116 23:12:47.438315   31467 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 23:12:47.438319   31467 command_runner.go:130] > # metrics_port = 9090
	I0116 23:12:47.438326   31467 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 23:12:47.438330   31467 command_runner.go:130] > # metrics_socket = ""
	I0116 23:12:47.438355   31467 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 23:12:47.438367   31467 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 23:12:47.438379   31467 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 23:12:47.438390   31467 command_runner.go:130] > # certificate on any modification event.
	I0116 23:12:47.438397   31467 command_runner.go:130] > # metrics_cert = ""
	I0116 23:12:47.438421   31467 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 23:12:47.438431   31467 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 23:12:47.438441   31467 command_runner.go:130] > # metrics_key = ""
	I0116 23:12:47.438447   31467 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 23:12:47.438453   31467 command_runner.go:130] > [crio.tracing]
	I0116 23:12:47.438459   31467 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 23:12:47.438465   31467 command_runner.go:130] > # enable_tracing = false
	I0116 23:12:47.438470   31467 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 23:12:47.438478   31467 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 23:12:47.438483   31467 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 23:12:47.438490   31467 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 23:12:47.438496   31467 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 23:12:47.438502   31467 command_runner.go:130] > [crio.stats]
	I0116 23:12:47.438508   31467 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 23:12:47.438519   31467 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 23:12:47.438526   31467 command_runner.go:130] > # stats_collection_period = 0
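	The CRI-O dump above is the stock crio.conf template with a handful of values overridden for this profile: conmon = "/usr/libexec/crio/conmon", conmon_cgroup = "pod", cgroup_manager = "cgroupfs", pids_limit = 1024, drop_infra_ctr = false, pinns_path = "/usr/bin/pinns", pause_image = "registry.k8s.io/pause:3.9", enable_metrics = true, and the 16 MiB (16777216-byte) gRPC send/receive limits. A minimal sketch for spot-checking those overrides on the node, assuming the rendered file sits at the conventional /etc/crio/crio.conf path on the minikube ISO:
	
	out/minikube-linux-amd64 -p multinode-328490 ssh \
	  "sudo grep -E '^(conmon|cgroup_manager|pids_limit|drop_infra_ctr|pinns_path|pause_image|enable_metrics|grpc_max_(send|recv)_msg_size|seccomp_use_default_when_empty)' /etc/crio/crio.conf"
	
	Each of those keys appears uncommented in the dump above, so the grep should return the same values shown there.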
	I0116 23:12:47.438612   31467 cni.go:84] Creating CNI manager for ""
	I0116 23:12:47.438625   31467 cni.go:136] 3 nodes found, recommending kindnet
	I0116 23:12:47.438642   31467 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:12:47.438660   31467 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-328490 NodeName:multinode-328490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:12:47.438797   31467 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-328490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:12:47.438888   31467 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-328490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-328490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
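	The unit override above is what minikube writes into the kubelet systemd drop-in a few lines below (10-kubeadm.conf), pointing the kubelet at the CRI-O socket and the node IP. A small, hedged check that systemd actually resolves to this ExecStart, assuming the unit is named plainly "kubelet" on the ISO:
	
	out/minikube-linux-amd64 -p multinode-328490 ssh \
	  "sudo systemctl cat kubelet | grep -E 'ExecStart=|container-runtime-endpoint|node-ip'"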
	I0116 23:12:47.438940   31467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:12:47.447959   31467 command_runner.go:130] > kubeadm
	I0116 23:12:47.447986   31467 command_runner.go:130] > kubectl
	I0116 23:12:47.447990   31467 command_runner.go:130] > kubelet
	I0116 23:12:47.448009   31467 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:12:47.448069   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:12:47.456448   31467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0116 23:12:47.471679   31467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:12:47.486782   31467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
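	The kubeadm manifest rendered earlier now lives at /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch for validating it in place before kubeadm is invoked, assuming the bundled kubeadm v1.28.4 provides the "config validate" subcommand (introduced around v1.26):
	
	out/minikube-linux-amd64 -p multinode-328490 ssh \
	  "sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"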
	I0116 23:12:47.502267   31467 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 23:12:47.505990   31467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:12:47.517701   31467 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490 for IP: 192.168.39.50
	I0116 23:12:47.517730   31467 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:12:47.517908   31467 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:12:47.517976   31467 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:12:47.518050   31467 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key
	I0116 23:12:47.518131   31467 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/apiserver.key.59dcb911
	I0116 23:12:47.518177   31467 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/proxy-client.key
	I0116 23:12:47.518196   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 23:12:47.518207   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 23:12:47.518217   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 23:12:47.518226   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 23:12:47.518234   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 23:12:47.518245   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 23:12:47.518258   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 23:12:47.518267   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 23:12:47.518384   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:12:47.518420   31467 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:12:47.518429   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:12:47.518452   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:12:47.518475   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:12:47.518495   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:12:47.518534   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:12:47.518572   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /usr/share/ca-certificates/149302.pem
	I0116 23:12:47.518591   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:12:47.518603   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem -> /usr/share/ca-certificates/14930.pem
	I0116 23:12:47.519253   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:12:47.542145   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:12:47.564482   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:12:47.586330   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:12:47.608632   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:12:47.630588   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:12:47.651668   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:12:47.673223   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:12:47.694165   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:12:47.714691   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:12:47.735443   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:12:47.756261   31467 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:12:47.771286   31467 ssh_runner.go:195] Run: openssl version
	I0116 23:12:47.776352   31467 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 23:12:47.776596   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:12:47.785455   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:12:47.789596   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:12:47.789736   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:12:47.789785   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:12:47.794868   31467 command_runner.go:130] > 3ec20f2e
	I0116 23:12:47.794929   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:12:47.803566   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:12:47.812344   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:12:47.816573   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:12:47.816702   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:12:47.816754   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:12:47.821924   31467 command_runner.go:130] > b5213941
	I0116 23:12:47.822085   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:12:47.831230   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:12:47.840552   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:12:47.844640   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:12:47.844776   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:12:47.844839   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:12:47.850112   31467 command_runner.go:130] > 51391683
	I0116 23:12:47.850293   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
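
The hash/symlink pairs above are how the CA files get installed for OpenSSL's directory lookup: `openssl x509 -hash -noout` prints the subject hash (for example b5213941 for minikubeCA.pem), and `<hash>.0` in /etc/ssl/certs is symlinked to the PEM so CApath-style lookups can find it. A small Go sketch of that pair of steps; it shells out to openssl, mirroring the logged commands, and must run as root to write /etc/ssl/certs.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA mirrors the two commands in the log: compute the OpenSSL subject
    // hash of a PEM certificate, then symlink <hash>.0 in /etc/ssl/certs to it.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any existing link, like `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
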
	I0116 23:12:47.859243   31467 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:12:47.863131   31467 command_runner.go:130] > ca.crt
	I0116 23:12:47.863144   31467 command_runner.go:130] > ca.key
	I0116 23:12:47.863149   31467 command_runner.go:130] > healthcheck-client.crt
	I0116 23:12:47.863156   31467 command_runner.go:130] > healthcheck-client.key
	I0116 23:12:47.863160   31467 command_runner.go:130] > peer.crt
	I0116 23:12:47.863164   31467 command_runner.go:130] > peer.key
	I0116 23:12:47.863168   31467 command_runner.go:130] > server.crt
	I0116 23:12:47.863172   31467 command_runner.go:130] > server.key
	I0116 23:12:47.863351   31467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:12:47.868600   31467 command_runner.go:130] > Certificate will not expire
	I0116 23:12:47.868665   31467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:12:47.873669   31467 command_runner.go:130] > Certificate will not expire
	I0116 23:12:47.873905   31467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:12:47.878959   31467 command_runner.go:130] > Certificate will not expire
	I0116 23:12:47.879222   31467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:12:47.884225   31467 command_runner.go:130] > Certificate will not expire
	I0116 23:12:47.884504   31467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:12:47.889599   31467 command_runner.go:130] > Certificate will not expire
	I0116 23:12:47.889660   31467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:12:47.894657   31467 command_runner.go:130] > Certificate will not expire
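
The `-checkend 86400` runs above ask whether each certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means it does not. The same check can be done directly with crypto/x509, as in this sketch (the path is one of the files checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d, which is what `openssl x509 -checkend 86400` asks for d = 24h.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
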
	I0116 23:12:47.894920   31467 kubeadm.go:404] StartCluster: {Name:multinode-328490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-328490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:12:47.895073   31467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:12:47.895134   31467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:12:47.929737   31467 cri.go:89] found id: ""
	I0116 23:12:47.929819   31467 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:12:47.938425   31467 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0116 23:12:47.938445   31467 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0116 23:12:47.938451   31467 command_runner.go:130] > /var/lib/minikube/etcd:
	I0116 23:12:47.938458   31467 command_runner.go:130] > member
	I0116 23:12:47.938475   31467 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:12:47.938484   31467 kubeadm.go:636] restartCluster start
	I0116 23:12:47.938534   31467 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:12:47.946383   31467 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:47.946859   31467 kubeconfig.go:92] found "multinode-328490" server: "https://192.168.39.50:8443"
	I0116 23:12:47.947249   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:12:47.947496   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:12:47.948026   31467 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 23:12:47.948225   31467 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:12:47.955838   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:47.955891   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:47.965974   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:48.456596   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:48.456691   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:48.469201   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:48.956180   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:49.297421   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:49.309536   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:49.456808   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:49.456903   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:49.469966   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:49.956619   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:49.956692   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:49.967681   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:50.455996   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:50.456094   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:50.467509   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:50.956064   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:50.956171   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:50.967310   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:51.455862   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:51.455960   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:51.466600   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:51.956224   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:51.956311   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:51.967033   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:52.456125   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:52.456213   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:52.466969   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:52.956603   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:52.956704   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:52.967509   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:53.456077   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:53.456156   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:53.467862   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:53.956934   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:53.957021   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:53.967786   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:54.456409   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:54.456518   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:54.467277   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:54.955900   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:54.955993   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:54.967325   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:55.455869   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:55.455943   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:55.466985   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:55.956581   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:55.956668   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:55.967495   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:56.456057   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:56.456159   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:56.467331   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:56.955892   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:56.955988   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:56.966828   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:57.455859   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:57.455939   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:57.467079   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:57.955868   31467 api_server.go:166] Checking apiserver status ...
	I0116 23:12:57.955958   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:12:57.966894   31467 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:12:57.966926   31467 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
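
The repeated "Checking apiserver status ..." entries above are a poll loop: roughly every half second the runner retries `pgrep -xnf kube-apiserver.*minikube.*` until the process appears or the surrounding context deadline fires, which is why the phase ends with "context deadline exceeded" and falls through to a reconfigure. A schematic version of that loop follows; the 10s deadline and 500ms interval are illustrative rather than minikube's exact values, and the command runs locally here instead of over SSH.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverPID runs the same pgrep pattern the log shows and returns the PID,
    // or an error for as long as the process is not up yet.
    func apiserverPID(ctx context.Context) (string, error) {
        out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        // Poll until the process appears or the deadline expires.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()

        for {
            pid, err := apiserverPID(ctx)
            if err == nil {
                fmt.Println("apiserver pid:", pid)
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("gave up:", ctx.Err()) // context deadline exceeded
                return
            case <-ticker.C:
            }
        }
    }
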
	I0116 23:12:57.966947   31467 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:12:57.966960   31467 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:12:57.967022   31467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:12:58.002175   31467 cri.go:89] found id: ""
	I0116 23:12:58.002260   31467 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:12:58.017034   31467 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:12:58.025341   31467 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 23:12:58.025364   31467 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 23:12:58.025374   31467 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 23:12:58.025381   31467 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:12:58.025412   31467 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:12:58.025448   31467 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:12:58.033136   31467 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:12:58.033164   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:12:58.127230   31467 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:12:58.127518   31467 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 23:12:58.127936   31467 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 23:12:58.128318   31467 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:12:58.128811   31467 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0116 23:12:58.129293   31467 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:12:58.129991   31467 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0116 23:12:58.130496   31467 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0116 23:12:58.130970   31467 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:12:58.131272   31467 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:12:58.131672   31467 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:12:58.132920   31467 command_runner.go:130] > [certs] Using the existing "sa" key
	I0116 23:12:58.133369   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:12:58.185669   31467 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:12:58.307133   31467 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:12:58.534821   31467 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:12:58.911144   31467 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:12:59.188236   31467 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:12:59.191013   31467 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057619507s)
	I0116 23:12:59.191039   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:12:59.378298   31467 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:12:59.378327   31467 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:12:59.378341   31467 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 23:12:59.378368   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:12:59.456046   31467 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:12:59.456132   31467 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:12:59.458904   31467 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:12:59.460051   31467 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:12:59.461975   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:12:59.532386   31467 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
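
Rather than a full `kubeadm init`, the restart path above re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch that replays the same command sequence with os/exec, stopping at the first failing phase; the error handling is illustrative, not minikube's.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // The same phase sequence that appears in the log, run one at a time so a
        // failure points at the specific phase.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
                phase,
            )
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Print(string(out))
            if err != nil {
                log.Fatalf("phase %q failed: %v", phase, err)
            }
        }
    }
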
	I0116 23:12:59.532540   31467 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:12:59.532641   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:13:00.033606   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:13:00.533514   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:13:01.033316   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:13:01.532976   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:13:01.560417   31467 command_runner.go:130] > 1074
	I0116 23:13:01.560697   31467 api_server.go:72] duration metric: took 2.028164751s to wait for apiserver process to appear ...
	I0116 23:13:01.560719   31467 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:13:01.560737   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:01.561384   31467 api_server.go:269] stopped: https://192.168.39.50:8443/healthz: Get "https://192.168.39.50:8443/healthz": dial tcp 192.168.39.50:8443: connect: connection refused
	I0116 23:13:02.061572   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:05.410192   31467 api_server.go:279] https://192.168.39.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:13:05.410223   31467 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:13:05.410239   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:05.455336   31467 api_server.go:279] https://192.168.39.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:13:05.455377   31467 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:13:05.561544   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:05.568883   31467 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:13:05.568927   31467 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:13:06.061478   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:06.066344   31467 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:13:06.066369   31467 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:13:06.560956   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:06.569532   31467 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:13:06.569558   31467 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:13:07.061747   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:07.066966   31467 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
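
The healthz wait above polls https://192.168.39.50:8443/healthz without credentials, so the initial 403 for system:anonymous and the 500s listing failed poststarthooks simply mean "not ready yet"; the loop stops only once the endpoint answers 200 with "ok". A minimal poller of the same shape follows; the InsecureSkipVerify transport is an assumption to keep the sketch short, and a real client would trust the cluster CA instead.

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it returns 200 or the
    // context deadline expires. 403/500 responses are treated as "not ready yet".
    func waitHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Assumption for brevity; do not skip verification outside a test VM.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return nil
                }
                fmt.Printf("healthz not ready yet: %d\n", resp.StatusCode)
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitHealthz(ctx, "https://192.168.39.50:8443/healthz"); err != nil {
            fmt.Println("apiserver never became healthy:", err)
        }
    }
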
	I0116 23:13:07.067073   31467 round_trippers.go:463] GET https://192.168.39.50:8443/version
	I0116 23:13:07.067087   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:07.067099   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:07.067108   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:07.079734   31467 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0116 23:13:07.079757   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:07.079771   31467 round_trippers.go:580]     Audit-Id: 54dbbc66-1da2-4c61-a71d-cc645717c4ab
	I0116 23:13:07.079778   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:07.079785   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:07.079792   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:07.079800   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:07.079810   31467 round_trippers.go:580]     Content-Length: 264
	I0116 23:13:07.079822   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:07 GMT
	I0116 23:13:07.079848   31467 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 23:13:07.079923   31467 api_server.go:141] control plane version: v1.28.4
	I0116 23:13:07.079942   31467 api_server.go:131] duration metric: took 5.51921543s to wait for apiserver health ...
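
With healthz green, the control-plane version is read from GET /version, whose JSON body appears above. Decoding it needs only a struct with the fields of interest, for example:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // versionInfo holds the subset of the /version payload used above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        // Abbreviated copy of the response body shown in the log.
        body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.28.4
    }
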
	I0116 23:13:07.079954   31467 cni.go:84] Creating CNI manager for ""
	I0116 23:13:07.079969   31467 cni.go:136] 3 nodes found, recommending kindnet
	I0116 23:13:07.081983   31467 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 23:13:07.083567   31467 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 23:13:07.099224   31467 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 23:13:07.099251   31467 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 23:13:07.099261   31467 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 23:13:07.099271   31467 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 23:13:07.099281   31467 command_runner.go:130] > Access: 2024-01-16 23:12:33.865314209 +0000
	I0116 23:13:07.099294   31467 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 23:13:07.099307   31467 command_runner.go:130] > Change: 2024-01-16 23:12:32.165314209 +0000
	I0116 23:13:07.099313   31467 command_runner.go:130] >  Birth: -
	I0116 23:13:07.099373   31467 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 23:13:07.099387   31467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 23:13:07.119772   31467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 23:13:08.134350   31467 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 23:13:08.134385   31467 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 23:13:08.134393   31467 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 23:13:08.134401   31467 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 23:13:08.134423   31467 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.014621536s)
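
Configuring CNI here is a plain manifest apply: the kindnet YAML is written to /var/tmp/minikube/cni.yaml and applied with the node's bundled kubectl against the node-local kubeconfig, which is the command completed above. A local stand-in for that invocation (minikube runs it over SSH on the node):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml",
        ).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            log.Fatalf("kubectl apply failed: %v", err)
        }
    }
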
	I0116 23:13:08.134452   31467 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:13:08.134578   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:13:08.134590   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.134601   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.134611   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.138536   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:08.138569   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.138580   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.138589   31467 round_trippers.go:580]     Audit-Id: 806cb992-b4c4-484a-b119-321a1aa52117
	I0116 23:13:08.138611   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.138619   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.138627   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.138635   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.139965   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83175 chars]
	I0116 23:13:08.143993   31467 system_pods.go:59] 12 kube-system pods found
	I0116 23:13:08.144029   31467 system_pods.go:61] "coredns-5dd5756b68-7lcpl" [2c5cd6ef-7b39-48aa-b234-13dda7343591] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:13:08.144036   31467 system_pods.go:61] "etcd-multinode-328490" [92c91283-c595-4eb5-af56-913835c6c778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:13:08.144042   31467 system_pods.go:61] "kindnet-7s7p2" [d5e4026d-cf51-44ae-9fd4-2467d26183a3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 23:13:08.144051   31467 system_pods.go:61] "kindnet-d8kbq" [8e64d242-68b1-44e4-8a88-fd54dae1863c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 23:13:08.144061   31467 system_pods.go:61] "kindnet-ngl9m" [7c9ef7d7-d303-4e94-8f22-2c26d29627a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 23:13:08.144073   31467 system_pods.go:61] "kube-apiserver-multinode-328490" [4deddb28-05c8-440a-8c76-f45eaa7c42d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:13:08.144091   31467 system_pods.go:61] "kube-controller-manager-multinode-328490" [46b93b7c-b6f2-4ef9-9cb9-395a154034b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:13:08.144099   31467 system_pods.go:61] "kube-proxy-6vmdk" [ba882fac-57b9-4e3a-afc5-09f016f542bf] Running
	I0116 23:13:08.144113   31467 system_pods.go:61] "kube-proxy-bqt7h" [8903f17c-7460-4896-826d-76d99335348d] Running
	I0116 23:13:08.144120   31467 system_pods.go:61] "kube-proxy-tc46j" [57831696-d514-4547-9f95-59ea41569c65] Running
	I0116 23:13:08.144125   31467 system_pods.go:61] "kube-scheduler-multinode-328490" [0f132072-d49d-46ed-a25f-526a38a74885] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:13:08.144134   31467 system_pods.go:61] "storage-provisioner" [a9895967-db72-4455-81be-1a2b274e3a42] Running
	I0116 23:13:08.144142   31467 system_pods.go:74] duration metric: took 9.679701ms to wait for pod list to return data ...
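
"waiting for kube-system pods to appear" is a single pod list in the kube-system namespace; the log issues it through minikube's round-tripper wrapper, but the equivalent client-go call is short. The kubeconfig path below is the one this run loads; everything else is standard client-go and is a sketch, not minikube's actual code path.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17975-6238/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s (%s)\n", p.Name, p.Status.Phase)
        }
    }
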
	I0116 23:13:08.144150   31467 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:13:08.144224   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 23:13:08.144234   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.144245   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.144255   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.146825   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:08.146875   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.146899   31467 round_trippers.go:580]     Audit-Id: cd496a02-fa68-4e58-af68-1725fd1f9c94
	I0116 23:13:08.146907   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.146929   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.146941   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.146949   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.146961   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.147238   31467 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16355 chars]
	I0116 23:13:08.148182   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:13:08.148206   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:13:08.148215   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:13:08.148219   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:13:08.148223   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:13:08.148227   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:13:08.148231   31467 node_conditions.go:105] duration metric: took 4.076467ms to run NodePressure ...
	I0116 23:13:08.148248   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:13:08.352856   31467 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 23:13:08.352891   31467 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 23:13:08.352923   31467 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:13:08.353043   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0116 23:13:08.353055   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.353066   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.353075   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.356197   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:08.356215   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.356222   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.356227   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.356232   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.356237   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.356242   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.356246   31467 round_trippers.go:580]     Audit-Id: b164fc5a-c2b8-40af-8e00-736b19a9f84b
	I0116 23:13:08.356710   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"863"},"items":[{"metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"807","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I0116 23:13:08.357857   31467 kubeadm.go:787] kubelet initialised
	I0116 23:13:08.357878   31467 kubeadm.go:788] duration metric: took 4.946068ms waiting for restarted kubelet to initialise ...
	I0116 23:13:08.357886   31467 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:13:08.357935   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:13:08.357947   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.357953   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.357965   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.361480   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:08.361502   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.361512   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.361520   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.361528   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.361536   31467 round_trippers.go:580]     Audit-Id: fa904eda-20d9-48b8-a2ab-3796ce635a72
	I0116 23:13:08.361544   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.361552   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.363070   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"863"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83175 chars]
	I0116 23:13:08.365518   31467 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:08.365620   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:08.365630   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.365637   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.365645   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.367788   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:08.367808   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.367818   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.367826   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.367834   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.367842   31467 round_trippers.go:580]     Audit-Id: ce5ec397-2995-4a6d-a6a4-6e3bb99190e5
	I0116 23:13:08.367850   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.367857   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.368002   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:08.368471   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:08.368488   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.368495   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.368500   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.370250   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:08.370270   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.370280   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.370288   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.370297   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.370305   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.370313   31467 round_trippers.go:580]     Audit-Id: 74754adc-b97f-4212-bff4-c4ed9c6c97a0
	I0116 23:13:08.370322   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.370468   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 23:13:08.370889   31467 pod_ready.go:97] node "multinode-328490" hosting pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.370914   31467 pod_ready.go:81] duration metric: took 5.374977ms waiting for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
	E0116 23:13:08.370925   31467 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-328490" hosting pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.370940   31467 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:08.370999   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:13:08.371010   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.371020   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.371032   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.372725   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:08.372742   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.372752   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.372761   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.372769   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.372779   31467 round_trippers.go:580]     Audit-Id: 3d78985b-b540-456d-9aeb-c2020ebf0788
	I0116 23:13:08.372791   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.372798   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.372916   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"807","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 23:13:08.373334   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:08.373352   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.373363   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.373372   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.375393   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:08.375411   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.375419   31467 round_trippers.go:580]     Audit-Id: 62cbbb68-febc-4b18-9044-65587d277914
	I0116 23:13:08.375428   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.375436   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.375444   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.375452   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.375462   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.375630   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 23:13:08.375993   31467 pod_ready.go:97] node "multinode-328490" hosting pod "etcd-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.376013   31467 pod_ready.go:81] duration metric: took 5.065583ms waiting for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	E0116 23:13:08.376023   31467 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-328490" hosting pod "etcd-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.376045   31467 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:08.376108   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-328490
	I0116 23:13:08.376119   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.376135   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.376147   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.377851   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:08.377868   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.377877   31467 round_trippers.go:580]     Audit-Id: b75ae6c7-1cc1-4338-99e1-ec1b50541263
	I0116 23:13:08.377885   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.377900   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.377908   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.377919   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.377931   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.378085   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-328490","namespace":"kube-system","uid":"4deddb28-05c8-440a-8c76-f45eaa7c42d9","resourceVersion":"800","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.mirror":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.seen":"2024-01-16T23:01:56.235897532Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0116 23:13:08.378582   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:08.378596   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.378603   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.378608   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.380119   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:08.380133   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.380143   31467 round_trippers.go:580]     Audit-Id: 78e33bfd-05f5-4ccf-906c-fbc15d397849
	I0116 23:13:08.380157   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.380166   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.380179   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.380192   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.380204   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.380378   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 23:13:08.380736   31467 pod_ready.go:97] node "multinode-328490" hosting pod "kube-apiserver-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.380753   31467 pod_ready.go:81] duration metric: took 4.699475ms waiting for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	E0116 23:13:08.380764   31467 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-328490" hosting pod "kube-apiserver-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.380780   31467 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:08.380827   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:13:08.380835   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.380842   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.380848   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.382523   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:08.382536   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.382545   31467 round_trippers.go:580]     Audit-Id: f0268a42-1150-4bd1-a70f-484469123001
	I0116 23:13:08.382553   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.382562   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.382572   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.382581   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.382593   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.382859   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"811","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0116 23:13:08.535494   31467 request.go:629] Waited for 152.165544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:08.535564   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:08.535571   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.535597   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.535625   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.538745   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:08.538771   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.538782   31467 round_trippers.go:580]     Audit-Id: 4c2abec0-7186-4ab2-aac5-527232b68e74
	I0116 23:13:08.538790   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.538797   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.538806   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.538815   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.538824   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.538984   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 23:13:08.539401   31467 pod_ready.go:97] node "multinode-328490" hosting pod "kube-controller-manager-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.539425   31467 pod_ready.go:81] duration metric: took 158.638973ms waiting for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	E0116 23:13:08.539434   31467 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-328490" hosting pod "kube-controller-manager-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
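
The "Waited for ... due to client-side throttling" entries above come from client-go's built-in client-side rate limiter, not from API-server priority and fairness, which is why several of the sequential node lookups pause for roughly 150-200ms. A minimal, hypothetical sketch of how a client could widen those limits; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not minikube's actual settings:

    // Hypothetical sketch (not minikube's code): build a clientset with wider
    // client-side rate limits so back-to-back GETs like the ones above are not
    // delayed by the default limiter.
    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) *kubernetes.Clientset {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig) // path supplied by caller; illustrative
        if err != nil {
            log.Fatal(err)
        }
        cfg.QPS = 50    // client-go falls back to 5 req/s when this is left at zero
        cfg.Burst = 100 // and to a burst of 10
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        return cs
    }
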
	I0116 23:13:08.539441   31467 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:08.734785   31467 request.go:629] Waited for 195.280126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vmdk
	I0116 23:13:08.734869   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vmdk
	I0116 23:13:08.734874   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.734885   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.734892   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.741404   31467 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 23:13:08.741433   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.741442   31467 round_trippers.go:580]     Audit-Id: 3d763a05-0f91-4ea7-9163-5d85e7264ce9
	I0116 23:13:08.741451   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.741462   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.741469   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.741478   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.741487   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.741606   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vmdk","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba882fac-57b9-4e3a-afc5-09f016f542bf","resourceVersion":"860","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 23:13:08.935148   31467 request.go:629] Waited for 193.079044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:08.935271   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:08.935282   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:08.935294   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:08.935312   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:08.938429   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:08.938465   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:08.938475   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:08 GMT
	I0116 23:13:08.938483   31467 round_trippers.go:580]     Audit-Id: eabf3e3f-815e-4764-bb2f-4fee192130e0
	I0116 23:13:08.938491   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:08.938499   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:08.938511   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:08.938520   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:08.938712   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 23:13:08.939025   31467 pod_ready.go:97] node "multinode-328490" hosting pod "kube-proxy-6vmdk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.939048   31467 pod_ready.go:81] duration metric: took 399.594032ms waiting for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	E0116 23:13:08.939061   31467 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-328490" hosting pod "kube-proxy-6vmdk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:08.939072   31467 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:09.134968   31467 request.go:629] Waited for 195.830022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:13:09.135051   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:13:09.135058   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:09.135071   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:09.135082   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:09.137959   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:09.138003   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:09.138013   31467 round_trippers.go:580]     Audit-Id: 8b865506-8432-4434-9165-6181ba5ef4b5
	I0116 23:13:09.138020   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:09.138027   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:09.138035   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:09.138046   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:09.138056   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:09 GMT
	I0116 23:13:09.138397   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqt7h","generateName":"kube-proxy-","namespace":"kube-system","uid":"8903f17c-7460-4896-826d-76d99335348d","resourceVersion":"521","creationTimestamp":"2024-01-16T23:03:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:03:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 23:13:09.335245   31467 request.go:629] Waited for 196.396402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:13:09.335333   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:13:09.335338   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:09.335346   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:09.335355   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:09.338007   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:09.338027   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:09.338034   31467 round_trippers.go:580]     Audit-Id: 508a7c38-5131-49cb-9e8a-865a20605fbc
	I0116 23:13:09.338039   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:09.338053   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:09.338061   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:09.338070   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:09.338078   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:09 GMT
	I0116 23:13:09.338571   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m02","uid":"37500630-512c-4fdd-b9d7-a7a751761f39","resourceVersion":"856","creationTimestamp":"2024-01-16T23:03:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_05_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:03:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0116 23:13:09.338930   31467 pod_ready.go:92] pod "kube-proxy-bqt7h" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:09.338951   31467 pod_ready.go:81] duration metric: took 399.867069ms waiting for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:09.338964   31467 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:09.534983   31467 request.go:629] Waited for 195.940626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:13:09.535079   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:13:09.535092   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:09.535104   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:09.535115   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:09.537554   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:09.537574   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:09.537585   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:09.537594   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:09.537603   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:09.537618   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:09 GMT
	I0116 23:13:09.537626   31467 round_trippers.go:580]     Audit-Id: b05cbee2-d3a3-42c7-a830-d66559f4e05f
	I0116 23:13:09.537638   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:09.537794   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tc46j","generateName":"kube-proxy-","namespace":"kube-system","uid":"57831696-d514-4547-9f95-59ea41569c65","resourceVersion":"727","creationTimestamp":"2024-01-16T23:04:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 23:13:09.734599   31467 request.go:629] Waited for 196.306442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:13:09.734688   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:13:09.734693   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:09.734701   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:09.734708   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:09.738303   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:09.738327   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:09.738343   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:09.738368   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:09.738377   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:09.738385   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:09 GMT
	I0116 23:13:09.738393   31467 round_trippers.go:580]     Audit-Id: 76d52527-b970-46d5-a651-947c3e5ddadd
	I0116 23:13:09.738400   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:09.738528   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m03","uid":"f19a8ad4-4a7f-4648-b320-7d48cffd62df","resourceVersion":"759","creationTimestamp":"2024-01-16T23:05:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_05_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:05:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0116 23:13:09.738911   31467 pod_ready.go:92] pod "kube-proxy-tc46j" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:09.738930   31467 pod_ready.go:81] duration metric: took 399.959256ms waiting for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:09.738946   31467 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:09.934928   31467 request.go:629] Waited for 195.909294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:13:09.935015   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:13:09.935023   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:09.935031   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:09.935037   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:09.937530   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:09.937551   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:09.937559   31467 round_trippers.go:580]     Audit-Id: e1e94d38-039c-4d61-a0ac-0684fd366db0
	I0116 23:13:09.937564   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:09.937569   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:09.937573   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:09.937578   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:09.937583   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:09 GMT
	I0116 23:13:09.937918   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-328490","namespace":"kube-system","uid":"0f132072-d49d-46ed-a25f-526a38a74885","resourceVersion":"816","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.mirror":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.seen":"2024-01-16T23:01:56.235892116Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I0116 23:13:10.135547   31467 request.go:629] Waited for 197.242464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:10.135626   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:10.135634   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:10.135645   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:10.135657   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:10.137942   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:10.137965   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:10.137978   31467 round_trippers.go:580]     Audit-Id: a7debde4-6da6-4209-aa8d-9975b028d309
	I0116 23:13:10.137986   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:10.137994   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:10.138002   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:10.138010   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:10.138018   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:10 GMT
	I0116 23:13:10.138200   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 23:13:10.138601   31467 pod_ready.go:97] node "multinode-328490" hosting pod "kube-scheduler-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:10.138623   31467 pod_ready.go:81] duration metric: took 399.670749ms waiting for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	E0116 23:13:10.138632   31467 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-328490" hosting pod "kube-scheduler-multinode-328490" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-328490" has status "Ready":"False"
	I0116 23:13:10.138640   31467 pod_ready.go:38] duration metric: took 1.780747558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
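
Each pod_ready check above follows the same pattern: fetch the pod, fetch the node it is scheduled on, and skip the pod (pod_ready.go:97) when that node does not report Ready. A minimal sketch of that node-condition test with client-go, shown here as an illustration rather than the actual minikube helper:

    // Hypothetical sketch: report whether a node's Ready condition is True,
    // which is the test behind the `node ... has status "Ready":"False"` messages above.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
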
	I0116 23:13:10.138662   31467 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:13:10.152173   31467 command_runner.go:130] > -16
	I0116 23:13:10.152262   31467 ops.go:34] apiserver oom_adj: -16
	I0116 23:13:10.152275   31467 kubeadm.go:640] restartCluster took 22.213785722s
	I0116 23:13:10.152288   31467 kubeadm.go:406] StartCluster complete in 22.257375092s
	I0116 23:13:10.152316   31467 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:13:10.152380   31467 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:13:10.152954   31467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:13:10.153146   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:13:10.153165   31467 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:13:10.155126   31467 out.go:177] * Enabled addons: 
	I0116 23:13:10.153394   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:13:10.153404   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:13:10.156504   31467 addons.go:505] enable addons completed in 3.340605ms: enabled=[]
	I0116 23:13:10.156726   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:13:10.157006   31467 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 23:13:10.157015   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:10.157022   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:10.157031   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:10.159990   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:10.160005   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:10.160011   31467 round_trippers.go:580]     Audit-Id: 54857a70-efaf-4a6f-90d7-29d737098437
	I0116 23:13:10.160016   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:10.160021   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:10.160026   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:10.160031   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:10.160036   31467 round_trippers.go:580]     Content-Length: 291
	I0116 23:13:10.160043   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:10 GMT
	I0116 23:13:10.160081   31467 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9e31c201-6ba7-47ab-b7c2-74a96553d8c6","resourceVersion":"862","creationTimestamp":"2024-01-16T23:01:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 23:13:10.160250   31467 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-328490" context rescaled to 1 replicas
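
The rescale step logged at kapi.go:248 goes through the deployment's scale subresource (the autoscaling/v1 Scale object returned just above). A hedged sketch of that read-then-update flow; the helper name and error handling are illustrative only:

    // Hypothetical sketch: pin the coredns deployment to one replica via the
    // scale subresource, mirroring the "rescaled to 1 replicas" step above.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == 1 {
            return nil // already at the desired replica count
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
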
	I0116 23:13:10.160281   31467 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:13:10.162297   31467 out.go:177] * Verifying Kubernetes components...
	I0116 23:13:10.163979   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:13:10.252989   31467 command_runner.go:130] > apiVersion: v1
	I0116 23:13:10.253007   31467 command_runner.go:130] > data:
	I0116 23:13:10.253016   31467 command_runner.go:130] >   Corefile: |
	I0116 23:13:10.253020   31467 command_runner.go:130] >     .:53 {
	I0116 23:13:10.253023   31467 command_runner.go:130] >         log
	I0116 23:13:10.253028   31467 command_runner.go:130] >         errors
	I0116 23:13:10.253034   31467 command_runner.go:130] >         health {
	I0116 23:13:10.253038   31467 command_runner.go:130] >            lameduck 5s
	I0116 23:13:10.253042   31467 command_runner.go:130] >         }
	I0116 23:13:10.253046   31467 command_runner.go:130] >         ready
	I0116 23:13:10.253051   31467 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 23:13:10.253056   31467 command_runner.go:130] >            pods insecure
	I0116 23:13:10.253061   31467 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 23:13:10.253068   31467 command_runner.go:130] >            ttl 30
	I0116 23:13:10.253074   31467 command_runner.go:130] >         }
	I0116 23:13:10.253080   31467 command_runner.go:130] >         prometheus :9153
	I0116 23:13:10.253087   31467 command_runner.go:130] >         hosts {
	I0116 23:13:10.253099   31467 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0116 23:13:10.253113   31467 command_runner.go:130] >            fallthrough
	I0116 23:13:10.253120   31467 command_runner.go:130] >         }
	I0116 23:13:10.253124   31467 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 23:13:10.253131   31467 command_runner.go:130] >            max_concurrent 1000
	I0116 23:13:10.253135   31467 command_runner.go:130] >         }
	I0116 23:13:10.253141   31467 command_runner.go:130] >         cache 30
	I0116 23:13:10.253152   31467 command_runner.go:130] >         loop
	I0116 23:13:10.253161   31467 command_runner.go:130] >         reload
	I0116 23:13:10.253166   31467 command_runner.go:130] >         loadbalance
	I0116 23:13:10.253175   31467 command_runner.go:130] >     }
	I0116 23:13:10.253183   31467 command_runner.go:130] > kind: ConfigMap
	I0116 23:13:10.253193   31467 command_runner.go:130] > metadata:
	I0116 23:13:10.253202   31467 command_runner.go:130] >   creationTimestamp: "2024-01-16T23:01:56Z"
	I0116 23:13:10.253212   31467 command_runner.go:130] >   name: coredns
	I0116 23:13:10.253219   31467 command_runner.go:130] >   namespace: kube-system
	I0116 23:13:10.253226   31467 command_runner.go:130] >   resourceVersion: "363"
	I0116 23:13:10.253231   31467 command_runner.go:130] >   uid: aa67df74-30d1-406e-a1c2-b69d125774a1
	I0116 23:13:10.255315   31467 node_ready.go:35] waiting up to 6m0s for node "multinode-328490" to be "Ready" ...
	I0116 23:13:10.255462   31467 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
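
The "already contains host.minikube.internal host record, skipping" decision follows the Corefile dump above, whose hosts block maps 192.168.39.1 to host.minikube.internal. A minimal sketch of that kind of check against the coredns ConfigMap; the simple substring test is an assumption made for illustration, not a claim about minikube's implementation:

    // Hypothetical sketch: look for the host.minikube.internal record in the
    // coredns ConfigMap before deciding whether the Corefile needs an update.
    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func hasMinikubeHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }
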
	I0116 23:13:10.334683   31467 request.go:629] Waited for 79.203856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:10.334757   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:10.334763   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:10.334770   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:10.334776   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:10.337338   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:10.337363   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:10.337373   31467 round_trippers.go:580]     Audit-Id: f5c18722-1541-4302-8a56-9a2a9f19605e
	I0116 23:13:10.337382   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:10.337390   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:10.337398   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:10.337406   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:10.337418   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:10 GMT
	I0116 23:13:10.337554   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"760","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 23:13:10.756126   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:10.756151   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:10.756159   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:10.756165   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:10.758961   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:10.758989   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:10.759000   31467 round_trippers.go:580]     Audit-Id: 78debe28-9057-461b-b45e-2d88aa2ca499
	I0116 23:13:10.759008   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:10.759016   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:10.759024   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:10.759044   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:10.759054   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:10 GMT
	I0116 23:13:10.759483   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:10.759810   31467 node_ready.go:49] node "multinode-328490" has status "Ready":"True"
	I0116 23:13:10.759825   31467 node_ready.go:38] duration metric: took 504.486222ms waiting for node "multinode-328490" to be "Ready" ...
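The node_ready wait above amounts to fetching the Node object and inspecting its Ready condition. Purely as an illustration (this is not minikube's implementation; the kubeconfig path and the helper name isNodeReady are assumptions), a minimal client-go sketch of that check:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the Node has a Ready condition with status True.
    func isNodeReady(node *corev1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: a kubeconfig at the default location that points at the cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The same GET /api/v1/nodes/<name> request that appears in the log above.
        node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-328490", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("node %s Ready=%v\n", node.Name, isNodeReady(node))
    }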
	I0116 23:13:10.759834   31467 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:13:10.759894   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:13:10.759904   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:10.759911   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:10.759917   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:10.763639   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:10.763656   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:10.763663   31467 round_trippers.go:580]     Audit-Id: 58574b95-82ab-4db1-80f3-2034a5af3852
	I0116 23:13:10.763668   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:10.763674   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:10.763681   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:10.763701   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:10.763709   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:10 GMT
	I0116 23:13:10.765170   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"872"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82925 chars]
	I0116 23:13:10.767618   31467 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
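The PodList above comes from a single GET on /api/v1/namespaces/kube-system/pods, after which the helper waits on each system-critical pod in turn, starting with the kube-dns (CoreDNS) pod. As a hedged sketch only (not the test's actual code, and assuming the clientset built in the previous sketch), the equivalent list-and-filter with client-go:

    // Assumes `client` is the *kubernetes.Clientset from the previous sketch.
    pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        // Pick out the CoreDNS pods by the k8s-app=kube-dns label seen in the log.
        if p.Labels["k8s-app"] == "kube-dns" {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }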
	I0116 23:13:10.935064   31467 request.go:629] Waited for 167.352133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:10.935147   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:10.935155   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:10.935163   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:10.935170   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:10.937820   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:10.937846   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:10.937856   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:10.937863   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:10 GMT
	I0116 23:13:10.937871   31467 round_trippers.go:580]     Audit-Id: c77888af-68a1-471b-9aed-4469bf1ba253
	I0116 23:13:10.937879   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:10.937887   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:10.937900   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:10.938086   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:11.134989   31467 request.go:629] Waited for 196.41469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:11.135071   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:11.135077   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:11.135084   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:11.135090   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:11.137887   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:11.137912   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:11.137923   31467 round_trippers.go:580]     Audit-Id: 80c88e4d-d01c-48e6-8eba-28b90ddf1b66
	I0116 23:13:11.137932   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:11.137938   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:11.137946   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:11.137951   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:11.137956   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:11 GMT
	I0116 23:13:11.138121   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:11.334636   31467 request.go:629] Waited for 66.183345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:11.334698   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:11.334703   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:11.334711   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:11.334717   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:11.337810   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:11.337837   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:11.337847   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:11.337854   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:11.337862   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:11 GMT
	I0116 23:13:11.337869   31467 round_trippers.go:580]     Audit-Id: 107a5891-7a00-4107-9445-33ad570ca024
	I0116 23:13:11.337875   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:11.337883   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:11.338077   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:11.535017   31467 request.go:629] Waited for 196.388354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:11.535086   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:11.535091   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:11.535098   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:11.535104   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:11.537956   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:11.537983   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:11.537992   31467 round_trippers.go:580]     Audit-Id: deca41fe-4f82-4f12-818e-7d3b2e5d2c95
	I0116 23:13:11.537999   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:11.538007   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:11.538014   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:11.538022   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:11.538031   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:11 GMT
	I0116 23:13:11.538396   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:11.767794   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:11.767820   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:11.767828   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:11.767834   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:11.770703   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:11.770727   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:11.770739   31467 round_trippers.go:580]     Audit-Id: 33bbbda0-8fdd-415e-8352-928aa70a874c
	I0116 23:13:11.770748   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:11.770754   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:11.770762   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:11.770770   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:11.770784   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:11 GMT
	I0116 23:13:11.770920   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:11.934709   31467 request.go:629] Waited for 163.269019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:11.934795   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:11.934804   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:11.934815   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:11.934828   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:11.937641   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:11.937663   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:11.937672   31467 round_trippers.go:580]     Audit-Id: c7d85f8b-6d32-4d60-a04c-29778cb40621
	I0116 23:13:11.937680   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:11.937688   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:11.937695   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:11.937704   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:11.937714   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:11 GMT
	I0116 23:13:11.937873   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:12.267837   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:12.267863   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:12.267870   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:12.267876   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:12.271109   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:12.271131   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:12.271140   31467 round_trippers.go:580]     Audit-Id: 4113baff-2423-487b-b423-e209adba8ad3
	I0116 23:13:12.271149   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:12.271156   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:12.271163   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:12.271170   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:12.271178   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:12 GMT
	I0116 23:13:12.272064   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:12.334684   31467 request.go:629] Waited for 62.173175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:12.334767   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:12.334773   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:12.334781   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:12.334790   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:12.337159   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:12.337178   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:12.337188   31467 round_trippers.go:580]     Audit-Id: 9e2f243e-a2ac-47f4-8065-b65eaf831df2
	I0116 23:13:12.337196   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:12.337203   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:12.337210   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:12.337218   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:12.337225   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:12 GMT
	I0116 23:13:12.337494   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:12.768037   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:12.768062   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:12.768070   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:12.768076   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:12.770623   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:12.770646   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:12.770658   31467 round_trippers.go:580]     Audit-Id: 10480253-ca26-428d-bb8e-449fb0793603
	I0116 23:13:12.770667   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:12.770686   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:12.770694   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:12.770706   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:12.770715   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:12 GMT
	I0116 23:13:12.770900   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:12.771316   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:12.771328   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:12.771335   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:12.771342   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:12.773356   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:12.773372   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:12.773381   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:12.773388   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:12 GMT
	I0116 23:13:12.773396   31467 round_trippers.go:580]     Audit-Id: 6853b8dc-7b6e-411b-9f3f-771a0772bd42
	I0116 23:13:12.773405   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:12.773414   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:12.773431   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:12.773636   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:12.773939   31467 pod_ready.go:102] pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace has status "Ready":"False"
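The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's built-in rate limiter, not from the API server. That budget is controlled by the QPS and Burst fields on the client's rest.Config; a hedged sketch follows (cfg is the *rest.Config returned by BuildConfigFromFlags, and the values are illustrative, not the ones minikube uses):

    // Assumes the same imports as the first sketch above.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    // QPS and Burst bound the client-side rate limiter that produced the
    // "Waited ... due to client-side throttling" messages; illustrative values only.
    cfg.QPS = 50
    cfg.Burst = 100
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    _ = client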
	I0116 23:13:13.268285   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:13.268313   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:13.268326   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:13.268335   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:13.271446   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:13.271469   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:13.271478   31467 round_trippers.go:580]     Audit-Id: c2f6fafd-9a74-4f9c-b9f5-e9ed9729f387
	I0116 23:13:13.271487   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:13.271494   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:13.271502   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:13.271510   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:13.271520   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:13 GMT
	I0116 23:13:13.272109   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:13.272535   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:13.272573   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:13.272584   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:13.272590   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:13.274512   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:13.274528   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:13.274538   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:13.274546   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:13.274555   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:13.274566   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:13 GMT
	I0116 23:13:13.274575   31467 round_trippers.go:580]     Audit-Id: 4299978a-27e0-4960-be8f-281a731f581d
	I0116 23:13:13.274585   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:13.274847   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:13.768633   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:13.768666   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:13.768678   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:13.768687   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:13.771871   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:13.771904   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:13.771914   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:13 GMT
	I0116 23:13:13.771930   31467 round_trippers.go:580]     Audit-Id: 1f033a8e-2bf6-4301-8869-05680b87248d
	I0116 23:13:13.771936   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:13.771941   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:13.771946   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:13.771951   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:13.772500   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:13.773017   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:13.773033   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:13.773041   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:13.773050   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:13.775650   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:13.775670   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:13.775680   31467 round_trippers.go:580]     Audit-Id: 40c730ca-ce42-4791-973b-0c6f50409e07
	I0116 23:13:13.775688   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:13.775695   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:13.775703   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:13.775711   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:13.775721   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:13 GMT
	I0116 23:13:13.775891   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:14.268767   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:14.268791   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:14.268803   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:14.268812   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:14.272441   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:14.272464   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:14.272481   31467 round_trippers.go:580]     Audit-Id: fbb81f2a-6ca9-4c1a-9af1-3dc6284e9f95
	I0116 23:13:14.272489   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:14.272496   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:14.272503   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:14.272511   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:14.272526   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:14 GMT
	I0116 23:13:14.272693   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"823","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 23:13:14.273178   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:14.273197   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:14.273208   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:14.273217   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:14.279331   31467 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 23:13:14.279361   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:14.279381   31467 round_trippers.go:580]     Audit-Id: 7fc35988-a5b0-4096-9fd1-b394b1dcd536
	I0116 23:13:14.279390   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:14.279401   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:14.279410   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:14.279424   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:14.279431   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:14 GMT
	I0116 23:13:14.279603   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:14.768210   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:13:14.768237   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:14.768245   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:14.768251   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:14.770786   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:14.770818   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:14.770830   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:14.770847   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:14 GMT
	I0116 23:13:14.770855   31467 round_trippers.go:580]     Audit-Id: e9f8f535-b3e5-4829-8c2d-e35d39b447bb
	I0116 23:13:14.770863   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:14.770871   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:14.770886   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:14.771399   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"878","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 23:13:14.772029   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:14.772047   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:14.772057   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:14.772066   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:14.775701   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:14.775726   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:14.775736   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:14.775744   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:14.775752   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:14.775761   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:14 GMT
	I0116 23:13:14.775768   31467 round_trippers.go:580]     Audit-Id: 78d2715d-9d89-44ce-b541-719864cc1735
	I0116 23:13:14.775775   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:14.776092   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:14.776421   31467 pod_ready.go:92] pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:14.776442   31467 pod_ready.go:81] duration metric: took 4.008805325s waiting for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
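The pod_ready wait that just completed is a simple poll: fetch the Pod, check its Ready condition, sleep, and repeat until it is True or the timeout expires. A minimal sketch of that pattern, again assuming the clientset from the first sketch (the interval, timeout handling, and the helper name waitPodReady are assumptions, not minikube's code):

    // waitPodReady polls the named pod until its Ready condition is True or the timeout expires.
    // Assumes `client` is the *kubernetes.Clientset from the first sketch; needs the "time" import.
    func waitPodReady(client *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the polling cadence visible in the log above
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", namespace, name, timeout)
    }

A call such as waitPodReady(client, "kube-system", "coredns-5dd5756b68-7lcpl", 6*time.Minute) would reproduce the kind of wait logged above.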
	I0116 23:13:14.776451   31467 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:14.776499   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:13:14.776504   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:14.776511   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:14.776516   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:14.778931   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:14.778950   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:14.778960   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:14.778969   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:14 GMT
	I0116 23:13:14.778980   31467 round_trippers.go:580]     Audit-Id: 0f98df28-776b-406e-aca5-f4689ed6c8aa
	I0116 23:13:14.778988   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:14.778998   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:14.779008   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:14.779175   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"807","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 23:13:14.779536   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:14.779549   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:14.779556   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:14.779561   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:14.781393   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:14.781412   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:14.781422   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:14.781430   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:14.781440   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:14.781449   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:14 GMT
	I0116 23:13:14.781463   31467 round_trippers.go:580]     Audit-Id: 0cb46a72-19a6-43e8-b5b5-2e6069dbd498
	I0116 23:13:14.781469   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:14.784862   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:15.276771   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:13:15.276800   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:15.276811   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:15.276820   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:15.279765   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:15.279794   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:15.279804   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:15.279818   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:15.279826   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:15 GMT
	I0116 23:13:15.279834   31467 round_trippers.go:580]     Audit-Id: bdeded02-bd9c-4e11-a3c6-dd058b1e8d57
	I0116 23:13:15.279843   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:15.279850   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:15.280668   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"807","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 23:13:15.281078   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:15.281093   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:15.281105   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:15.281114   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:15.283737   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:15.283768   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:15.283779   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:15 GMT
	I0116 23:13:15.283788   31467 round_trippers.go:580]     Audit-Id: ba08e08e-b62e-4549-b559-519d8b4d4c2b
	I0116 23:13:15.283796   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:15.283804   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:15.283812   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:15.283820   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:15.283981   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:15.776658   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:13:15.776687   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:15.776695   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:15.776702   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:15.779173   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:15.779192   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:15.779199   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:15.779205   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:15.779210   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:15.779216   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:15 GMT
	I0116 23:13:15.779224   31467 round_trippers.go:580]     Audit-Id: fe6b36fe-f8c2-4ba0-8506-27ef5ce07bd5
	I0116 23:13:15.779233   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:15.779459   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"807","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 23:13:15.779926   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:15.779940   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:15.779947   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:15.779953   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:15.782024   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:15.782040   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:15.782060   31467 round_trippers.go:580]     Audit-Id: f3d47936-857a-413c-a318-6e5956c2e98d
	I0116 23:13:15.782072   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:15.782082   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:15.782092   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:15.782101   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:15.782116   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:15 GMT
	I0116 23:13:15.782590   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:16.276924   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:13:16.276953   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:16.276966   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:16.276977   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:16.279693   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:16.279714   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:16.279725   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:16.279734   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:16.279743   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:16 GMT
	I0116 23:13:16.279752   31467 round_trippers.go:580]     Audit-Id: 4cc43343-300d-4d64-a4f6-fee59762b4dc
	I0116 23:13:16.279760   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:16.279767   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:16.279893   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"807","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 23:13:16.280382   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:16.280397   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:16.280405   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:16.280414   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:16.282598   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:16.282624   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:16.282633   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:16.282644   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:16.282653   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:16.282662   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:16 GMT
	I0116 23:13:16.282674   31467 round_trippers.go:580]     Audit-Id: 68119927-48ff-470c-881b-9a92a393d1cd
	I0116 23:13:16.282682   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:16.282789   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:16.777457   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:13:16.777480   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:16.777489   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:16.777495   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:16.779637   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:16.779663   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:16.779672   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:16.779681   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:16 GMT
	I0116 23:13:16.779689   31467 round_trippers.go:580]     Audit-Id: 1720ff78-f344-49b6-8866-59993cde1466
	I0116 23:13:16.779698   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:16.779706   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:16.779718   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:16.779876   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"807","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 23:13:16.780383   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:16.780402   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:16.780413   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:16.780424   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:16.782386   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:16.782407   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:16.782417   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:16.782425   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:16.782433   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:16.782441   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:16 GMT
	I0116 23:13:16.782453   31467 round_trippers.go:580]     Audit-Id: 8d51e1c3-7e2f-4ab9-bf6d-48dede7ece57
	I0116 23:13:16.782465   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:16.782613   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:16.783015   31467 pod_ready.go:102] pod "etcd-multinode-328490" in "kube-system" namespace has status "Ready":"False"
	I0116 23:13:17.276683   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:13:17.276705   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:17.276713   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:17.276719   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:17.279753   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:17.279780   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:17.279791   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:17 GMT
	I0116 23:13:17.279799   31467 round_trippers.go:580]     Audit-Id: 2358f0ff-89ad-410c-891a-dd2f62787107
	I0116 23:13:17.279807   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:17.279815   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:17.279822   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:17.279831   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:17.280069   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"887","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 23:13:17.280481   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:17.280493   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:17.280500   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:17.280509   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:17.282730   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:17.282751   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:17.282761   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:17.282773   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:17 GMT
	I0116 23:13:17.282784   31467 round_trippers.go:580]     Audit-Id: 5dccb569-2c1f-48b7-82e8-5ec99cc93382
	I0116 23:13:17.282795   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:17.282806   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:17.282818   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:17.282961   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:17.283334   31467 pod_ready.go:92] pod "etcd-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:17.283351   31467 pod_ready.go:81] duration metric: took 2.506895067s waiting for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:17.283376   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:17.283431   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-328490
	I0116 23:13:17.283438   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:17.283445   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:17.283451   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:17.285487   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:17.285508   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:17.285518   31467 round_trippers.go:580]     Audit-Id: 43ef6167-2ee1-452b-bbb1-6991c9d5d0d9
	I0116 23:13:17.285526   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:17.285534   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:17.285548   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:17.285561   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:17.285570   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:17 GMT
	I0116 23:13:17.285964   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-328490","namespace":"kube-system","uid":"4deddb28-05c8-440a-8c76-f45eaa7c42d9","resourceVersion":"800","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.mirror":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.seen":"2024-01-16T23:01:56.235897532Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0116 23:13:17.286494   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:17.286511   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:17.286521   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:17.286533   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:17.288316   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:17.288334   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:17.288344   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:17.288353   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:17.288361   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:17.288367   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:17.288375   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:17 GMT
	I0116 23:13:17.288386   31467 round_trippers.go:580]     Audit-Id: e2a214d7-ae2e-4149-802a-7cac06a231a6
	I0116 23:13:17.288535   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:17.784205   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-328490
	I0116 23:13:17.784227   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:17.784234   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:17.784240   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:17.786686   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:17.786711   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:17.786722   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:17.786730   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:17.786738   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:17 GMT
	I0116 23:13:17.786747   31467 round_trippers.go:580]     Audit-Id: c435e928-47c8-4465-98bd-7371efeef39a
	I0116 23:13:17.786760   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:17.786769   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:17.786981   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-328490","namespace":"kube-system","uid":"4deddb28-05c8-440a-8c76-f45eaa7c42d9","resourceVersion":"800","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.mirror":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.seen":"2024-01-16T23:01:56.235897532Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0116 23:13:17.787406   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:17.787421   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:17.787431   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:17.787440   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:17.789890   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:17.789904   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:17.789910   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:17.789916   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:17.789931   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:17 GMT
	I0116 23:13:17.789938   31467 round_trippers.go:580]     Audit-Id: 26e11250-530a-4e01-a5c6-6da9846f3612
	I0116 23:13:17.789946   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:17.789959   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:17.790451   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:18.284296   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-328490
	I0116 23:13:18.284316   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:18.284328   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:18.284336   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:18.287927   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:18.287952   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:18.287963   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:18.287968   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:18.287974   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:18.287983   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:18 GMT
	I0116 23:13:18.287991   31467 round_trippers.go:580]     Audit-Id: fd3b2035-f685-4058-9266-872bfba9f483
	I0116 23:13:18.288004   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:18.288143   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-328490","namespace":"kube-system","uid":"4deddb28-05c8-440a-8c76-f45eaa7c42d9","resourceVersion":"800","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.mirror":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.seen":"2024-01-16T23:01:56.235897532Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0116 23:13:18.288582   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:18.288596   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:18.288603   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:18.288612   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:18.291572   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:18.291590   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:18.291600   31467 round_trippers.go:580]     Audit-Id: 41321c5f-7cc0-435e-bd26-85758999dd01
	I0116 23:13:18.291609   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:18.291618   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:18.291626   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:18.291634   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:18.291646   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:18 GMT
	I0116 23:13:18.291740   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:18.784443   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-328490
	I0116 23:13:18.784469   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:18.784477   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:18.784483   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:18.787067   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:18.787088   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:18.787097   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:18.787105   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:18 GMT
	I0116 23:13:18.787113   31467 round_trippers.go:580]     Audit-Id: c81def61-da5d-4b05-806b-fd9a65062000
	I0116 23:13:18.787120   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:18.787127   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:18.787134   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:18.787331   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-328490","namespace":"kube-system","uid":"4deddb28-05c8-440a-8c76-f45eaa7c42d9","resourceVersion":"900","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.mirror":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.seen":"2024-01-16T23:01:56.235897532Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 23:13:18.787788   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:18.787803   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:18.787810   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:18.787816   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:18.789891   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:18.789910   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:18.789919   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:18.789926   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:18.789933   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:18.789941   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:18.789948   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:18 GMT
	I0116 23:13:18.789955   31467 round_trippers.go:580]     Audit-Id: c79ff4cc-eaf0-4260-abc2-8b882ae57ed5
	I0116 23:13:18.790259   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:18.790582   31467 pod_ready.go:92] pod "kube-apiserver-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:18.790599   31467 pod_ready.go:81] duration metric: took 1.507211769s waiting for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:18.790607   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:18.790660   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:13:18.790671   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:18.790678   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:18.790686   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:18.793109   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:18.793135   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:18.793146   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:18.793157   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:18.793168   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:18.793177   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:18.793185   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:18 GMT
	I0116 23:13:18.793198   31467 round_trippers.go:580]     Audit-Id: 3b25bb1d-d2c7-4ea7-a7f4-09cd59f091e8
	I0116 23:13:18.793885   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"811","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0116 23:13:18.794395   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:18.794412   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:18.794420   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:18.794427   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:18.797471   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:18.797491   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:18.797501   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:18.797510   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:18.797520   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:18 GMT
	I0116 23:13:18.797530   31467 round_trippers.go:580]     Audit-Id: e3a91658-f3d9-4aab-8af2-d4a41ae4190e
	I0116 23:13:18.797541   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:18.797555   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:18.798273   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:19.291361   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:13:19.291385   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:19.291392   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:19.291398   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:19.294141   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:19.294165   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:19.294175   31467 round_trippers.go:580]     Audit-Id: 2cbf9033-08ca-424d-8849-eb4f13992cd8
	I0116 23:13:19.294200   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:19.294208   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:19.294217   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:19.294224   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:19.294232   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:19 GMT
	I0116 23:13:19.294586   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"811","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0116 23:13:19.295040   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:19.295056   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:19.295066   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:19.295075   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:19.297042   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:19.297059   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:19.297067   31467 round_trippers.go:580]     Audit-Id: 8676271c-539a-432a-958f-8e7ef8095fc3
	I0116 23:13:19.297078   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:19.297084   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:19.297090   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:19.297098   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:19.297107   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:19 GMT
	I0116 23:13:19.297396   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:19.791026   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:13:19.791057   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:19.791068   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:19.791078   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:19.793771   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:19.793800   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:19.793810   31467 round_trippers.go:580]     Audit-Id: c6fdd630-d6b1-4f0d-bbbb-3fdfd9eacf28
	I0116 23:13:19.793819   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:19.793827   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:19.793836   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:19.793845   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:19.793850   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:19 GMT
	I0116 23:13:19.794238   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"811","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0116 23:13:19.794645   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:19.794662   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:19.794669   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:19.794675   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:19.796631   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:19.796649   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:19.796657   31467 round_trippers.go:580]     Audit-Id: 8d351f88-7223-45a8-b3d8-6dbdf0443170
	I0116 23:13:19.796665   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:19.796674   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:19.796683   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:19.796690   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:19.796698   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:19 GMT
	I0116 23:13:19.796993   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:20.291630   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:13:20.291653   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:20.291662   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:20.291668   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:20.294142   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:20.294162   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:20.294171   31467 round_trippers.go:580]     Audit-Id: b3cf4f03-fef0-4415-a486-bf7a7053c8e9
	I0116 23:13:20.294177   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:20.294186   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:20.294198   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:20.294209   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:20.294220   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:20 GMT
	I0116 23:13:20.294508   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"811","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0116 23:13:20.294981   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:20.294996   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:20.295003   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:20.295009   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:20.296990   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:20.297004   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:20.297010   31467 round_trippers.go:580]     Audit-Id: 44ac3215-005d-4b67-9072-137902be6e69
	I0116 23:13:20.297016   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:20.297021   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:20.297026   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:20.297034   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:20.297039   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:20 GMT
	I0116 23:13:20.297420   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:20.791071   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:13:20.791096   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:20.791115   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:20.791122   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:20.793917   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:20.793942   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:20.793951   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:20.793958   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:20 GMT
	I0116 23:13:20.793966   31467 round_trippers.go:580]     Audit-Id: 695142ba-18b1-43a1-b0f1-0277394bcd4d
	I0116 23:13:20.793975   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:20.793982   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:20.793990   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:20.794351   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"901","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 23:13:20.794801   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:20.794817   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:20.794824   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:20.794830   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:20.798193   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:20.798208   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:20.798217   31467 round_trippers.go:580]     Audit-Id: 73cfdf83-d3af-4ece-929c-99c6aea95842
	I0116 23:13:20.798222   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:20.798227   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:20.798232   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:20.798237   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:20.798243   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:20 GMT
	I0116 23:13:20.798967   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:20.799258   31467 pod_ready.go:92] pod "kube-controller-manager-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:20.799272   31467 pod_ready.go:81] duration metric: took 2.008659466s waiting for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:20.799283   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:20.799332   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vmdk
	I0116 23:13:20.799339   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:20.799346   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:20.799351   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:20.801214   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:20.801232   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:20.801241   31467 round_trippers.go:580]     Audit-Id: b4fa4d58-d92b-44dc-9567-d97485e8f842
	I0116 23:13:20.801249   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:20.801256   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:20.801267   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:20.801275   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:20.801290   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:20 GMT
	I0116 23:13:20.801440   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vmdk","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba882fac-57b9-4e3a-afc5-09f016f542bf","resourceVersion":"860","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 23:13:20.801919   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:20.801934   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:20.801941   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:20.801947   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:20.803879   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:20.803899   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:20.803909   31467 round_trippers.go:580]     Audit-Id: 063a7fc6-9ba4-4c57-b32f-24557e6def06
	I0116 23:13:20.803917   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:20.803925   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:20.803937   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:20.803945   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:20.803956   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:20 GMT
	I0116 23:13:20.804373   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:20.804684   31467 pod_ready.go:92] pod "kube-proxy-6vmdk" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:20.804702   31467 pod_ready.go:81] duration metric: took 5.412954ms waiting for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:20.804713   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:20.935096   31467 request.go:629] Waited for 130.317691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:13:20.935167   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:13:20.935175   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:20.935186   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:20.935198   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:20.938667   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:20.938694   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:20.938703   31467 round_trippers.go:580]     Audit-Id: 5822c573-5114-4fcd-ae6f-107a886243c2
	I0116 23:13:20.938712   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:20.938720   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:20.938728   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:20.938736   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:20.938743   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:20 GMT
	I0116 23:13:20.939326   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqt7h","generateName":"kube-proxy-","namespace":"kube-system","uid":"8903f17c-7460-4896-826d-76d99335348d","resourceVersion":"521","creationTimestamp":"2024-01-16T23:03:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:03:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 23:13:21.135146   31467 request.go:629] Waited for 195.408028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:13:21.135213   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:13:21.135225   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:21.135235   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:21.135245   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:21.137939   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:21.137960   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:21.137967   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:21.137972   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:21.137977   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:21.137982   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:21 GMT
	I0116 23:13:21.137988   31467 round_trippers.go:580]     Audit-Id: 8b80d755-e37d-4ad7-87ae-0477e4baa840
	I0116 23:13:21.137995   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:21.138131   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m02","uid":"37500630-512c-4fdd-b9d7-a7a751761f39","resourceVersion":"856","creationTimestamp":"2024-01-16T23:03:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_05_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:03:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0116 23:13:21.138504   31467 pod_ready.go:92] pod "kube-proxy-bqt7h" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:21.138525   31467 pod_ready.go:81] duration metric: took 333.798343ms waiting for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:21.138538   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:21.334614   31467 request.go:629] Waited for 196.013211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:13:21.334694   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:13:21.334703   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:21.334716   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:21.334730   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:21.337470   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:21.337497   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:21.337506   31467 round_trippers.go:580]     Audit-Id: a2924bcb-ff87-49f7-ab4f-010323a94960
	I0116 23:13:21.337513   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:21.337520   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:21.337527   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:21.337535   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:21.337543   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:21 GMT
	I0116 23:13:21.337728   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tc46j","generateName":"kube-proxy-","namespace":"kube-system","uid":"57831696-d514-4547-9f95-59ea41569c65","resourceVersion":"727","creationTimestamp":"2024-01-16T23:04:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 23:13:21.535613   31467 request.go:629] Waited for 197.437037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:13:21.535685   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:13:21.535692   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:21.535703   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:21.535714   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:21.538187   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:21.538217   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:21.538228   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:21.538236   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:21 GMT
	I0116 23:13:21.538242   31467 round_trippers.go:580]     Audit-Id: e3370ac1-9b94-4846-8600-ce4b3969d800
	I0116 23:13:21.538247   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:21.538253   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:21.538257   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:21.538682   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m03","uid":"f19a8ad4-4a7f-4648-b320-7d48cffd62df","resourceVersion":"759","creationTimestamp":"2024-01-16T23:05:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_05_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:05:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0116 23:13:21.538974   31467 pod_ready.go:92] pod "kube-proxy-tc46j" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:21.538990   31467 pod_ready.go:81] duration metric: took 400.444007ms waiting for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:21.538999   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:21.735143   31467 request.go:629] Waited for 196.069933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:13:21.735217   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:13:21.735224   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:21.735236   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:21.735245   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:21.738376   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:21.738399   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:21.738406   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:21.738412   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:21.738418   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:21.738424   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:21 GMT
	I0116 23:13:21.738432   31467 round_trippers.go:580]     Audit-Id: 16f1deb2-d90b-48dd-ba4f-34651c62845f
	I0116 23:13:21.738441   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:21.738836   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-328490","namespace":"kube-system","uid":"0f132072-d49d-46ed-a25f-526a38a74885","resourceVersion":"893","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.mirror":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.seen":"2024-01-16T23:01:56.235892116Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 23:13:21.935602   31467 request.go:629] Waited for 196.414142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:21.935675   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:13:21.935682   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:21.935694   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:21.935704   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:21.938480   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:21.938504   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:21.938518   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:21.938532   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:21 GMT
	I0116 23:13:21.938540   31467 round_trippers.go:580]     Audit-Id: 1a8b0759-8ffc-4a7a-88b6-c10fbd24946f
	I0116 23:13:21.938561   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:21.938574   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:21.938584   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:21.938795   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 23:13:21.939102   31467 pod_ready.go:92] pod "kube-scheduler-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:13:21.939116   31467 pod_ready.go:81] duration metric: took 400.110293ms waiting for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:13:21.939126   31467 pod_ready.go:38] duration metric: took 11.179279571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
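
The pod_ready waits above poll each system pod's Ready condition through the apiserver, and the "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter delaying bursts of GETs. As a rough illustration only (not minikube's actual implementation; the helper name waitPodReady, the polling interval, and the use of the default kubeconfig path are assumptions for the sketch), a readiness poll of this kind can be written with client-go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the named pod until its Ready condition is True,
    // mirroring the pod_ready waits in the log (interval and timeout here
    // are illustrative values, not minikube's).
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        // Load the default kubeconfig (~/.kube/config); minikube builds its
        // client differently, this is just the simplest standalone setup.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-6vmdk", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
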
	I0116 23:13:21.939141   31467 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:13:21.939188   31467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:13:21.952444   31467 command_runner.go:130] > 1074
	I0116 23:13:21.952659   31467 api_server.go:72] duration metric: took 11.792343599s to wait for apiserver process to appear ...
	I0116 23:13:21.952673   31467 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:13:21.952690   31467 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:13:21.957954   31467 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0116 23:13:21.958012   31467 round_trippers.go:463] GET https://192.168.39.50:8443/version
	I0116 23:13:21.958020   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:21.958027   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:21.958033   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:21.959233   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:13:21.959254   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:21.959264   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:21 GMT
	I0116 23:13:21.959275   31467 round_trippers.go:580]     Audit-Id: e2fabf13-da24-4b66-938d-f2f021e1512b
	I0116 23:13:21.959287   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:21.959300   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:21.959312   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:21.959324   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:21.959336   31467 round_trippers.go:580]     Content-Length: 264
	I0116 23:13:21.959361   31467 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 23:13:21.959412   31467 api_server.go:141] control plane version: v1.28.4
	I0116 23:13:21.959429   31467 api_server.go:131] duration metric: took 6.749323ms to wait for apiserver health ...
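
The healthz probe and /version request above can be reproduced with client-go's discovery/REST client. A minimal standalone sketch, again assuming the default kubeconfig path rather than minikube's own client setup:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /healthz -- the apiserver answers "ok" when healthy, as logged above.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version -- returns the control-plane version (v1.28.4 in this run).
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
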
	I0116 23:13:21.959440   31467 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:13:22.134963   31467 request.go:629] Waited for 175.449083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:13:22.135046   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:13:22.135058   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:22.135070   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:22.135083   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:22.140440   31467 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 23:13:22.140459   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:22.140468   31467 round_trippers.go:580]     Audit-Id: 3fe8d6b4-eff8-4636-9751-2b4920b513e9
	I0116 23:13:22.140475   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:22.140483   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:22.140490   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:22.140497   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:22.140505   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:22 GMT
	I0116 23:13:22.142055   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"878","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81846 chars]
	I0116 23:13:22.145395   31467 system_pods.go:59] 12 kube-system pods found
	I0116 23:13:22.145419   31467 system_pods.go:61] "coredns-5dd5756b68-7lcpl" [2c5cd6ef-7b39-48aa-b234-13dda7343591] Running
	I0116 23:13:22.145423   31467 system_pods.go:61] "etcd-multinode-328490" [92c91283-c595-4eb5-af56-913835c6c778] Running
	I0116 23:13:22.145428   31467 system_pods.go:61] "kindnet-7s7p2" [d5e4026d-cf51-44ae-9fd4-2467d26183a3] Running
	I0116 23:13:22.145433   31467 system_pods.go:61] "kindnet-d8kbq" [8e64d242-68b1-44e4-8a88-fd54dae1863c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 23:13:22.145441   31467 system_pods.go:61] "kindnet-ngl9m" [7c9ef7d7-d303-4e94-8f22-2c26d29627a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 23:13:22.145449   31467 system_pods.go:61] "kube-apiserver-multinode-328490" [4deddb28-05c8-440a-8c76-f45eaa7c42d9] Running
	I0116 23:13:22.145454   31467 system_pods.go:61] "kube-controller-manager-multinode-328490" [46b93b7c-b6f2-4ef9-9cb9-395a154034b0] Running
	I0116 23:13:22.145464   31467 system_pods.go:61] "kube-proxy-6vmdk" [ba882fac-57b9-4e3a-afc5-09f016f542bf] Running
	I0116 23:13:22.145468   31467 system_pods.go:61] "kube-proxy-bqt7h" [8903f17c-7460-4896-826d-76d99335348d] Running
	I0116 23:13:22.145472   31467 system_pods.go:61] "kube-proxy-tc46j" [57831696-d514-4547-9f95-59ea41569c65] Running
	I0116 23:13:22.145477   31467 system_pods.go:61] "kube-scheduler-multinode-328490" [0f132072-d49d-46ed-a25f-526a38a74885] Running
	I0116 23:13:22.145482   31467 system_pods.go:61] "storage-provisioner" [a9895967-db72-4455-81be-1a2b274e3a42] Running
	I0116 23:13:22.145491   31467 system_pods.go:74] duration metric: took 186.042048ms to wait for pod list to return data ...
	I0116 23:13:22.145500   31467 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:13:22.334923   31467 request.go:629] Waited for 189.356779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/default/serviceaccounts
	I0116 23:13:22.335007   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/default/serviceaccounts
	I0116 23:13:22.335014   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:22.335025   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:22.335040   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:22.337774   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:13:22.337798   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:22.337808   31467 round_trippers.go:580]     Audit-Id: 2b19e257-d14a-4eb2-869f-cb069c98c0b8
	I0116 23:13:22.337816   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:22.337824   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:22.337831   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:22.337843   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:22.337851   31467 round_trippers.go:580]     Content-Length: 261
	I0116 23:13:22.337866   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:22 GMT
	I0116 23:13:22.337890   31467 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7e2499f3-4f80-4504-9e31-554411039785","resourceVersion":"335","creationTimestamp":"2024-01-16T23:02:08Z"}}]}
	I0116 23:13:22.338118   31467 default_sa.go:45] found service account: "default"
	I0116 23:13:22.338138   31467 default_sa.go:55] duration metric: took 192.631227ms for default service account to be created ...
	I0116 23:13:22.338146   31467 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:13:22.535580   31467 request.go:629] Waited for 197.378829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:13:22.535657   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:13:22.535664   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:22.535674   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:22.535684   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:22.540123   31467 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 23:13:22.540145   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:22.540153   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:22.540158   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:22.540164   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:22 GMT
	I0116 23:13:22.540169   31467 round_trippers.go:580]     Audit-Id: ebba1cc2-576f-4143-85d2-b220e181b2a2
	I0116 23:13:22.540174   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:22.540182   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:22.541659   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"878","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81846 chars]
	I0116 23:13:22.544267   31467 system_pods.go:86] 12 kube-system pods found
	I0116 23:13:22.544294   31467 system_pods.go:89] "coredns-5dd5756b68-7lcpl" [2c5cd6ef-7b39-48aa-b234-13dda7343591] Running
	I0116 23:13:22.544303   31467 system_pods.go:89] "etcd-multinode-328490" [92c91283-c595-4eb5-af56-913835c6c778] Running
	I0116 23:13:22.544310   31467 system_pods.go:89] "kindnet-7s7p2" [d5e4026d-cf51-44ae-9fd4-2467d26183a3] Running
	I0116 23:13:22.544319   31467 system_pods.go:89] "kindnet-d8kbq" [8e64d242-68b1-44e4-8a88-fd54dae1863c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 23:13:22.544331   31467 system_pods.go:89] "kindnet-ngl9m" [7c9ef7d7-d303-4e94-8f22-2c26d29627a9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 23:13:22.544345   31467 system_pods.go:89] "kube-apiserver-multinode-328490" [4deddb28-05c8-440a-8c76-f45eaa7c42d9] Running
	I0116 23:13:22.544354   31467 system_pods.go:89] "kube-controller-manager-multinode-328490" [46b93b7c-b6f2-4ef9-9cb9-395a154034b0] Running
	I0116 23:13:22.544365   31467 system_pods.go:89] "kube-proxy-6vmdk" [ba882fac-57b9-4e3a-afc5-09f016f542bf] Running
	I0116 23:13:22.544375   31467 system_pods.go:89] "kube-proxy-bqt7h" [8903f17c-7460-4896-826d-76d99335348d] Running
	I0116 23:13:22.544385   31467 system_pods.go:89] "kube-proxy-tc46j" [57831696-d514-4547-9f95-59ea41569c65] Running
	I0116 23:13:22.544393   31467 system_pods.go:89] "kube-scheduler-multinode-328490" [0f132072-d49d-46ed-a25f-526a38a74885] Running
	I0116 23:13:22.544403   31467 system_pods.go:89] "storage-provisioner" [a9895967-db72-4455-81be-1a2b274e3a42] Running
	I0116 23:13:22.544413   31467 system_pods.go:126] duration metric: took 206.259837ms to wait for k8s-apps to be running ...
	I0116 23:13:22.544425   31467 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:13:22.544480   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:13:22.558460   31467 system_svc.go:56] duration metric: took 14.02804ms WaitForService to wait for kubelet.
	I0116 23:13:22.558488   31467 kubeadm.go:581] duration metric: took 12.398174186s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:13:22.558513   31467 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:13:22.734986   31467 request.go:629] Waited for 176.388503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 23:13:22.735040   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 23:13:22.735044   31467 round_trippers.go:469] Request Headers:
	I0116 23:13:22.735058   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:13:22.735067   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:13:22.738373   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:13:22.738399   31467 round_trippers.go:577] Response Headers:
	I0116 23:13:22.738407   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:13:22 GMT
	I0116 23:13:22.738413   31467 round_trippers.go:580]     Audit-Id: b8a678fb-ce84-498e-9d33-c01c5d197885
	I0116 23:13:22.738418   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:13:22.738427   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:13:22.738435   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:13:22.738447   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:13:22.738719   31467 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"872","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0116 23:13:22.739313   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:13:22.739332   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:13:22.739341   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:13:22.739346   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:13:22.739349   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:13:22.739353   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:13:22.739356   31467 node_conditions.go:105] duration metric: took 180.838219ms to run NodePressure ...
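
The NodePressure step above reads each node's reported capacity (here 2 CPUs and 17784752Ki of ephemeral storage per node). A hedged sketch of listing nodes and printing the same capacity fields with client-go, under the same kubeconfig assumption as the earlier examples:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // These capacity fields correspond to the "node cpu capacity" and
            // "node storage ephemeral capacity" lines in the log above.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
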
	I0116 23:13:22.739366   31467 start.go:228] waiting for startup goroutines ...
	I0116 23:13:22.739374   31467 start.go:233] waiting for cluster config update ...
	I0116 23:13:22.739380   31467 start.go:242] writing updated cluster config ...
	I0116 23:13:22.739806   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:13:22.739884   31467 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/config.json ...
	I0116 23:13:22.743413   31467 out.go:177] * Starting worker node multinode-328490-m02 in cluster multinode-328490
	I0116 23:13:22.744905   31467 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:13:22.744931   31467 cache.go:56] Caching tarball of preloaded images
	I0116 23:13:22.745025   31467 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:13:22.745035   31467 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:13:22.745144   31467 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/config.json ...
	I0116 23:13:22.745381   31467 start.go:365] acquiring machines lock for multinode-328490-m02: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:13:22.745431   31467 start.go:369] acquired machines lock for "multinode-328490-m02" in 27.656µs
	I0116 23:13:22.745446   31467 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:13:22.745453   31467 fix.go:54] fixHost starting: m02
	I0116 23:13:22.745724   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:13:22.745756   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:13:22.759664   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43913
	I0116 23:13:22.760025   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:13:22.760434   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:13:22.760456   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:13:22.760829   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:13:22.761033   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:13:22.761193   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetState
	I0116 23:13:22.762930   31467 fix.go:102] recreateIfNeeded on multinode-328490-m02: state=Running err=<nil>
	W0116 23:13:22.762951   31467 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:13:22.764999   31467 out.go:177] * Updating the running kvm2 "multinode-328490-m02" VM ...
	I0116 23:13:22.766563   31467 machine.go:88] provisioning docker machine ...
	I0116 23:13:22.766582   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:13:22.766786   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetMachineName
	I0116 23:13:22.766919   31467 buildroot.go:166] provisioning hostname "multinode-328490-m02"
	I0116 23:13:22.766938   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetMachineName
	I0116 23:13:22.767083   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:13:22.769406   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:22.769808   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:13:22.769836   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:22.770008   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:13:22.770170   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:13:22.770285   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:13:22.770408   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:13:22.770568   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:13:22.770862   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0116 23:13:22.770875   31467 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328490-m02 && echo "multinode-328490-m02" | sudo tee /etc/hostname
	I0116 23:13:22.897343   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328490-m02
	
	I0116 23:13:22.897367   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:13:22.900095   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:22.900468   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:13:22.900502   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:22.900668   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:13:22.900857   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:13:22.901036   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:13:22.901179   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:13:22.901333   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:13:22.901637   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0116 23:13:22.901655   31467 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-328490-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-328490-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-328490-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:13:23.011426   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:13:23.011460   31467 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:13:23.011484   31467 buildroot.go:174] setting up certificates
	I0116 23:13:23.011496   31467 provision.go:83] configureAuth start
	I0116 23:13:23.011509   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetMachineName
	I0116 23:13:23.011813   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetIP
	I0116 23:13:23.014849   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.015174   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:13:23.015200   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.015385   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:13:23.017739   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.018098   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:13:23.018129   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.018283   31467 provision.go:138] copyHostCerts
	I0116 23:13:23.018319   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:13:23.018414   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:13:23.018439   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:13:23.018531   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:13:23.018649   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:13:23.018681   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:13:23.018691   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:13:23.018730   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:13:23.018791   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:13:23.018815   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:13:23.018824   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:13:23.018854   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:13:23.018916   31467 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.multinode-328490-m02 san=[192.168.39.152 192.168.39.152 localhost 127.0.0.1 minikube multinode-328490-m02]
	I0116 23:13:23.187375   31467 provision.go:172] copyRemoteCerts
	I0116 23:13:23.187425   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:13:23.187446   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:13:23.190148   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.190488   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:13:23.190524   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.190696   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:13:23.190879   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:13:23.191037   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:13:23.191126   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m02/id_rsa Username:docker}
	I0116 23:13:23.275205   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 23:13:23.275294   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:13:23.296730   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 23:13:23.296798   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 23:13:23.317716   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 23:13:23.317797   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:13:23.339358   31467 provision.go:86] duration metric: configureAuth took 327.851505ms
	I0116 23:13:23.339383   31467 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:13:23.339591   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:13:23.339658   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:13:23.342661   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.343028   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:13:23.343058   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:13:23.343224   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:13:23.343431   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:13:23.343637   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:13:23.343785   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:13:23.343944   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:13:23.344249   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0116 23:13:23.344264   31467 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:14:53.842797   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:14:53.842827   31467 machine.go:91] provisioned docker machine in 1m31.076247973s
	I0116 23:14:53.842838   31467 start.go:300] post-start starting for "multinode-328490-m02" (driver="kvm2")
	I0116 23:14:53.842849   31467 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:14:53.842866   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:14:53.843213   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:14:53.843241   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:14:53.846193   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:53.846616   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:14:53.846649   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:53.846772   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:14:53.846936   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:14:53.847116   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:14:53.847272   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m02/id_rsa Username:docker}
	I0116 23:14:53.933446   31467 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:14:53.937496   31467 command_runner.go:130] > NAME=Buildroot
	I0116 23:14:53.937522   31467 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 23:14:53.937529   31467 command_runner.go:130] > ID=buildroot
	I0116 23:14:53.937536   31467 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 23:14:53.937541   31467 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 23:14:53.937578   31467 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:14:53.937597   31467 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:14:53.937675   31467 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:14:53.937772   31467 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:14:53.937785   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /etc/ssl/certs/149302.pem
	I0116 23:14:53.937891   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:14:53.947741   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:14:53.970143   31467 start.go:303] post-start completed in 127.28819ms
	I0116 23:14:53.970172   31467 fix.go:56] fixHost completed within 1m31.224719807s
	I0116 23:14:53.970195   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:14:53.972740   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:53.973083   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:14:53.973115   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:53.973278   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:14:53.973455   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:14:53.973629   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:14:53.973745   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:14:53.973880   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:14:53.974198   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0116 23:14:53.974209   31467 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:14:54.082729   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705446894.071130505
	
	I0116 23:14:54.082752   31467 fix.go:206] guest clock: 1705446894.071130505
	I0116 23:14:54.082760   31467 fix.go:219] Guest: 2024-01-16 23:14:54.071130505 +0000 UTC Remote: 2024-01-16 23:14:53.970176542 +0000 UTC m=+450.125496678 (delta=100.953963ms)
	I0116 23:14:54.082779   31467 fix.go:190] guest clock delta is within tolerance: 100.953963ms
	I0116 23:14:54.082785   31467 start.go:83] releasing machines lock for "multinode-328490-m02", held for 1m31.337344337s
	I0116 23:14:54.082811   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:14:54.083026   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetIP
	I0116 23:14:54.085514   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:54.085949   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:14:54.085972   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:54.087870   31467 out.go:177] * Found network options:
	I0116 23:14:54.089315   31467 out.go:177]   - NO_PROXY=192.168.39.50
	W0116 23:14:54.090805   31467 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 23:14:54.090835   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:14:54.091343   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:14:54.091514   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:14:54.091610   31467 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:14:54.091653   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	W0116 23:14:54.091678   31467 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 23:14:54.091740   31467 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:14:54.091757   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:14:54.094356   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:54.094384   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:54.094790   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:14:54.094824   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:54.094855   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:14:54.094877   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:54.094947   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:14:54.095137   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:14:54.095152   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:14:54.095328   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:14:54.095361   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:14:54.095503   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:14:54.095508   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m02/id_rsa Username:docker}
	I0116 23:14:54.095643   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m02/id_rsa Username:docker}
	I0116 23:14:54.212800   31467 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 23:14:54.327611   31467 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 23:14:54.332650   31467 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 23:14:54.332884   31467 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:14:54.332941   31467 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:14:54.340642   31467 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0116 23:14:54.340665   31467 start.go:475] detecting cgroup driver to use...
	I0116 23:14:54.340738   31467 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:14:54.353460   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:14:54.365833   31467 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:14:54.365898   31467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:14:54.377725   31467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:14:54.389644   31467 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:14:54.532075   31467 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:14:54.649963   31467 docker.go:233] disabling docker service ...
	I0116 23:14:54.650035   31467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:14:54.662931   31467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:14:54.678356   31467 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:14:54.845003   31467 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:14:54.970030   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:14:54.982819   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:14:54.999276   31467 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 23:14:54.999313   31467 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:14:54.999369   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:14:55.015507   31467 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:14:55.015594   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:14:55.025066   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:14:55.034473   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
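Taken together, the sed edits above pin CRI-O's pause image and cgroup handling; their net effect on /etc/crio/crio.conf.d/02-crio.conf is a fragment like the following (the cgroup values are confirmed by the `crio config` dump later in this log; the fragment is shown only as a summary of the edits, not as the full file):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"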
	I0116 23:14:55.044229   31467 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:14:55.054564   31467 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:14:55.063286   31467 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 23:14:55.063537   31467 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:14:55.072900   31467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:14:55.189155   31467 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:14:58.815642   31467 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.626447461s)
	I0116 23:14:58.815672   31467 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:14:58.815726   31467 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:14:58.820381   31467 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 23:14:58.820404   31467 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 23:14:58.820411   31467 command_runner.go:130] > Device: 16h/22d	Inode: 1223        Links: 1
	I0116 23:14:58.820418   31467 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 23:14:58.820423   31467 command_runner.go:130] > Access: 2024-01-16 23:14:58.722276574 +0000
	I0116 23:14:58.820429   31467 command_runner.go:130] > Modify: 2024-01-16 23:14:58.722276574 +0000
	I0116 23:14:58.820434   31467 command_runner.go:130] > Change: 2024-01-16 23:14:58.722276574 +0000
	I0116 23:14:58.820438   31467 command_runner.go:130] >  Birth: -
	I0116 23:14:58.820528   31467 start.go:543] Will wait 60s for crictl version
	I0116 23:14:58.820579   31467 ssh_runner.go:195] Run: which crictl
	I0116 23:14:58.824054   31467 command_runner.go:130] > /usr/bin/crictl
	I0116 23:14:58.824254   31467 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:14:58.859844   31467 command_runner.go:130] > Version:  0.1.0
	I0116 23:14:58.859870   31467 command_runner.go:130] > RuntimeName:  cri-o
	I0116 23:14:58.859878   31467 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 23:14:58.859887   31467 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 23:14:58.860963   31467 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:14:58.861049   31467 ssh_runner.go:195] Run: crio --version
	I0116 23:14:58.908746   31467 command_runner.go:130] > crio version 1.24.1
	I0116 23:14:58.908773   31467 command_runner.go:130] > Version:          1.24.1
	I0116 23:14:58.908790   31467 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 23:14:58.908798   31467 command_runner.go:130] > GitTreeState:     dirty
	I0116 23:14:58.908807   31467 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 23:14:58.908815   31467 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 23:14:58.908825   31467 command_runner.go:130] > Compiler:         gc
	I0116 23:14:58.908836   31467 command_runner.go:130] > Platform:         linux/amd64
	I0116 23:14:58.908847   31467 command_runner.go:130] > Linkmode:         dynamic
	I0116 23:14:58.908862   31467 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 23:14:58.908875   31467 command_runner.go:130] > SeccompEnabled:   true
	I0116 23:14:58.908886   31467 command_runner.go:130] > AppArmorEnabled:  false
	I0116 23:14:58.910216   31467 ssh_runner.go:195] Run: crio --version
	I0116 23:14:58.949959   31467 command_runner.go:130] > crio version 1.24.1
	I0116 23:14:58.949984   31467 command_runner.go:130] > Version:          1.24.1
	I0116 23:14:58.949994   31467 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 23:14:58.949998   31467 command_runner.go:130] > GitTreeState:     dirty
	I0116 23:14:58.950004   31467 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 23:14:58.950009   31467 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 23:14:58.950013   31467 command_runner.go:130] > Compiler:         gc
	I0116 23:14:58.950018   31467 command_runner.go:130] > Platform:         linux/amd64
	I0116 23:14:58.950023   31467 command_runner.go:130] > Linkmode:         dynamic
	I0116 23:14:58.950030   31467 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 23:14:58.950035   31467 command_runner.go:130] > SeccompEnabled:   true
	I0116 23:14:58.950039   31467 command_runner.go:130] > AppArmorEnabled:  false
	I0116 23:14:58.952005   31467 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:14:58.953539   31467 out.go:177]   - env NO_PROXY=192.168.39.50
	I0116 23:14:58.954806   31467 main.go:141] libmachine: (multinode-328490-m02) Calling .GetIP
	I0116 23:14:58.957580   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:58.957930   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:14:58.957960   31467 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:14:58.958186   31467 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:14:58.962244   31467 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0116 23:14:58.962276   31467 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490 for IP: 192.168.39.152
	I0116 23:14:58.962289   31467 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:14:58.962430   31467 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:14:58.962465   31467 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:14:58.962476   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 23:14:58.962488   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 23:14:58.962506   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 23:14:58.962520   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 23:14:58.962587   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:14:58.962630   31467 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:14:58.962654   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:14:58.962697   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:14:58.962733   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:14:58.962765   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:14:58.962808   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:14:58.962833   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /usr/share/ca-certificates/149302.pem
	I0116 23:14:58.962846   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:14:58.962858   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem -> /usr/share/ca-certificates/14930.pem
	I0116 23:14:58.963170   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:14:58.984891   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:14:59.005541   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:14:59.026007   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:14:59.045422   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:14:59.065465   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:14:59.085584   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:14:59.106879   31467 ssh_runner.go:195] Run: openssl version
	I0116 23:14:59.111810   31467 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 23:14:59.112064   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:14:59.121878   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:14:59.127217   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:14:59.127278   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:14:59.127327   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:14:59.131954   31467 command_runner.go:130] > 51391683
	I0116 23:14:59.132191   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:14:59.140347   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:14:59.149888   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:14:59.154114   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:14:59.154137   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:14:59.154171   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:14:59.159051   31467 command_runner.go:130] > 3ec20f2e
	I0116 23:14:59.159212   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:14:59.167419   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:14:59.177060   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:14:59.181144   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:14:59.181263   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:14:59.181311   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:14:59.186072   31467 command_runner.go:130] > b5213941
	I0116 23:14:59.186311   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
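Each of the `ln -fs .../<hash>.0` commands above follows OpenSSL's hashed-directory convention: the link name is the certificate's subject hash (e.g. b5213941 printed just above), which is how TLS clients locate a CA under /etc/ssl/certs. A minimal sketch of the per-certificate step the log is running (paths illustrative):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"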
	I0116 23:14:59.195856   31467 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:14:59.199890   31467 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 23:14:59.199918   31467 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 23:14:59.199981   31467 ssh_runner.go:195] Run: crio config
	I0116 23:14:59.242069   31467 command_runner.go:130] ! time="2024-01-16 23:14:59.230451756Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 23:14:59.242094   31467 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 23:14:59.255925   31467 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 23:14:59.255956   31467 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 23:14:59.255968   31467 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 23:14:59.255974   31467 command_runner.go:130] > #
	I0116 23:14:59.255985   31467 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 23:14:59.255991   31467 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 23:14:59.256001   31467 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 23:14:59.256010   31467 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 23:14:59.256031   31467 command_runner.go:130] > # reload'.
	I0116 23:14:59.256041   31467 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 23:14:59.256052   31467 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 23:14:59.256066   31467 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 23:14:59.256076   31467 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 23:14:59.256082   31467 command_runner.go:130] > [crio]
	I0116 23:14:59.256088   31467 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 23:14:59.256101   31467 command_runner.go:130] > # containers images, in this directory.
	I0116 23:14:59.256112   31467 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 23:14:59.256131   31467 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 23:14:59.256143   31467 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 23:14:59.256156   31467 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 23:14:59.256169   31467 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 23:14:59.256179   31467 command_runner.go:130] > storage_driver = "overlay"
	I0116 23:14:59.256188   31467 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 23:14:59.256200   31467 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 23:14:59.256210   31467 command_runner.go:130] > storage_option = [
	I0116 23:14:59.256218   31467 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 23:14:59.256227   31467 command_runner.go:130] > ]
	I0116 23:14:59.256237   31467 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 23:14:59.256251   31467 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 23:14:59.256261   31467 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 23:14:59.256271   31467 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 23:14:59.256283   31467 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 23:14:59.256293   31467 command_runner.go:130] > # always happen on a node reboot
	I0116 23:14:59.256303   31467 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 23:14:59.256317   31467 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 23:14:59.256327   31467 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 23:14:59.256346   31467 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 23:14:59.256356   31467 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 23:14:59.256370   31467 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 23:14:59.256380   31467 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 23:14:59.256387   31467 command_runner.go:130] > # internal_wipe = true
	I0116 23:14:59.256392   31467 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 23:14:59.256401   31467 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 23:14:59.256407   31467 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 23:14:59.256413   31467 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 23:14:59.256422   31467 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 23:14:59.256426   31467 command_runner.go:130] > [crio.api]
	I0116 23:14:59.256434   31467 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 23:14:59.256439   31467 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 23:14:59.256446   31467 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 23:14:59.256451   31467 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 23:14:59.256460   31467 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 23:14:59.256467   31467 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 23:14:59.256472   31467 command_runner.go:130] > # stream_port = "0"
	I0116 23:14:59.256477   31467 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 23:14:59.256481   31467 command_runner.go:130] > # stream_enable_tls = false
	I0116 23:14:59.256486   31467 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 23:14:59.256490   31467 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 23:14:59.256499   31467 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 23:14:59.256528   31467 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 23:14:59.256541   31467 command_runner.go:130] > # minutes.
	I0116 23:14:59.256547   31467 command_runner.go:130] > # stream_tls_cert = ""
	I0116 23:14:59.256557   31467 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 23:14:59.256570   31467 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 23:14:59.256580   31467 command_runner.go:130] > # stream_tls_key = ""
	I0116 23:14:59.256589   31467 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 23:14:59.256602   31467 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 23:14:59.256612   31467 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 23:14:59.256617   31467 command_runner.go:130] > # stream_tls_ca = ""
	I0116 23:14:59.256628   31467 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 23:14:59.256635   31467 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 23:14:59.256642   31467 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 23:14:59.256649   31467 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 23:14:59.256676   31467 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 23:14:59.256684   31467 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 23:14:59.256688   31467 command_runner.go:130] > [crio.runtime]
	I0116 23:14:59.256694   31467 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 23:14:59.256702   31467 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 23:14:59.256706   31467 command_runner.go:130] > # "nofile=1024:2048"
	I0116 23:14:59.256715   31467 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 23:14:59.256719   31467 command_runner.go:130] > # default_ulimits = [
	I0116 23:14:59.256726   31467 command_runner.go:130] > # ]
	I0116 23:14:59.256732   31467 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 23:14:59.256738   31467 command_runner.go:130] > # no_pivot = false
	I0116 23:14:59.256743   31467 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 23:14:59.256750   31467 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 23:14:59.256755   31467 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 23:14:59.256763   31467 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 23:14:59.256768   31467 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 23:14:59.256777   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 23:14:59.256783   31467 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 23:14:59.256793   31467 command_runner.go:130] > # Cgroup setting for conmon
	I0116 23:14:59.256800   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 23:14:59.256806   31467 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 23:14:59.256812   31467 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 23:14:59.256820   31467 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 23:14:59.256826   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 23:14:59.256832   31467 command_runner.go:130] > conmon_env = [
	I0116 23:14:59.256839   31467 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 23:14:59.256843   31467 command_runner.go:130] > ]
	I0116 23:14:59.256848   31467 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 23:14:59.256855   31467 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 23:14:59.256861   31467 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 23:14:59.256867   31467 command_runner.go:130] > # default_env = [
	I0116 23:14:59.256871   31467 command_runner.go:130] > # ]
	I0116 23:14:59.256879   31467 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 23:14:59.256883   31467 command_runner.go:130] > # selinux = false
	I0116 23:14:59.256892   31467 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 23:14:59.256898   31467 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 23:14:59.256905   31467 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 23:14:59.256909   31467 command_runner.go:130] > # seccomp_profile = ""
	I0116 23:14:59.256917   31467 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 23:14:59.256922   31467 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 23:14:59.256931   31467 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 23:14:59.256935   31467 command_runner.go:130] > # which might increase security.
	I0116 23:14:59.256942   31467 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 23:14:59.256948   31467 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 23:14:59.256956   31467 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 23:14:59.256963   31467 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 23:14:59.256971   31467 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 23:14:59.256976   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:14:59.256983   31467 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 23:14:59.256988   31467 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 23:14:59.256996   31467 command_runner.go:130] > # the cgroup blockio controller.
	I0116 23:14:59.257001   31467 command_runner.go:130] > # blockio_config_file = ""
	I0116 23:14:59.257011   31467 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 23:14:59.257016   31467 command_runner.go:130] > # irqbalance daemon.
	I0116 23:14:59.257024   31467 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 23:14:59.257030   31467 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 23:14:59.257037   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:14:59.257042   31467 command_runner.go:130] > # rdt_config_file = ""
	I0116 23:14:59.257048   31467 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 23:14:59.257052   31467 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 23:14:59.257058   31467 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 23:14:59.257064   31467 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 23:14:59.257071   31467 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 23:14:59.257079   31467 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 23:14:59.257083   31467 command_runner.go:130] > # will be added.
	I0116 23:14:59.257089   31467 command_runner.go:130] > # default_capabilities = [
	I0116 23:14:59.257093   31467 command_runner.go:130] > # 	"CHOWN",
	I0116 23:14:59.257097   31467 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 23:14:59.257106   31467 command_runner.go:130] > # 	"FSETID",
	I0116 23:14:59.257115   31467 command_runner.go:130] > # 	"FOWNER",
	I0116 23:14:59.257122   31467 command_runner.go:130] > # 	"SETGID",
	I0116 23:14:59.257132   31467 command_runner.go:130] > # 	"SETUID",
	I0116 23:14:59.257139   31467 command_runner.go:130] > # 	"SETPCAP",
	I0116 23:14:59.257143   31467 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 23:14:59.257148   31467 command_runner.go:130] > # 	"KILL",
	I0116 23:14:59.257154   31467 command_runner.go:130] > # ]
	I0116 23:14:59.257160   31467 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 23:14:59.257168   31467 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 23:14:59.257172   31467 command_runner.go:130] > # default_sysctls = [
	I0116 23:14:59.257176   31467 command_runner.go:130] > # ]
	I0116 23:14:59.257183   31467 command_runner.go:130] > # List of devices on the host that a
	I0116 23:14:59.257189   31467 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 23:14:59.257195   31467 command_runner.go:130] > # allowed_devices = [
	I0116 23:14:59.257201   31467 command_runner.go:130] > # 	"/dev/fuse",
	I0116 23:14:59.257212   31467 command_runner.go:130] > # ]
	I0116 23:14:59.257222   31467 command_runner.go:130] > # List of additional devices. specified as
	I0116 23:14:59.257235   31467 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 23:14:59.257243   31467 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 23:14:59.257269   31467 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 23:14:59.257276   31467 command_runner.go:130] > # additional_devices = [
	I0116 23:14:59.257279   31467 command_runner.go:130] > # ]
	I0116 23:14:59.257285   31467 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 23:14:59.257295   31467 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 23:14:59.257305   31467 command_runner.go:130] > # 	"/etc/cdi",
	I0116 23:14:59.257316   31467 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 23:14:59.257325   31467 command_runner.go:130] > # ]
	I0116 23:14:59.257335   31467 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 23:14:59.257343   31467 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 23:14:59.257349   31467 command_runner.go:130] > # Defaults to false.
	I0116 23:14:59.257355   31467 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 23:14:59.257363   31467 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 23:14:59.257369   31467 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 23:14:59.257378   31467 command_runner.go:130] > # hooks_dir = [
	I0116 23:14:59.257390   31467 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 23:14:59.257403   31467 command_runner.go:130] > # ]
	I0116 23:14:59.257416   31467 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 23:14:59.257430   31467 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 23:14:59.257440   31467 command_runner.go:130] > # its default mounts from the following two files:
	I0116 23:14:59.257446   31467 command_runner.go:130] > #
	I0116 23:14:59.257452   31467 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 23:14:59.257462   31467 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 23:14:59.257474   31467 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 23:14:59.257481   31467 command_runner.go:130] > #
	I0116 23:14:59.257494   31467 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 23:14:59.257507   31467 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 23:14:59.257525   31467 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 23:14:59.257536   31467 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 23:14:59.257541   31467 command_runner.go:130] > #
	I0116 23:14:59.257549   31467 command_runner.go:130] > # default_mounts_file = ""
	I0116 23:14:59.257557   31467 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 23:14:59.257570   31467 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 23:14:59.257580   31467 command_runner.go:130] > pids_limit = 1024
	I0116 23:14:59.257594   31467 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0116 23:14:59.257608   31467 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 23:14:59.257621   31467 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 23:14:59.257637   31467 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 23:14:59.257647   31467 command_runner.go:130] > # log_size_max = -1
	I0116 23:14:59.257661   31467 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0116 23:14:59.257671   31467 command_runner.go:130] > # log_to_journald = false
	I0116 23:14:59.257680   31467 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 23:14:59.257690   31467 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 23:14:59.257703   31467 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 23:14:59.257715   31467 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 23:14:59.257727   31467 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 23:14:59.257737   31467 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 23:14:59.257750   31467 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 23:14:59.257760   31467 command_runner.go:130] > # read_only = false
	I0116 23:14:59.257769   31467 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 23:14:59.257781   31467 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 23:14:59.257792   31467 command_runner.go:130] > # live configuration reload.
	I0116 23:14:59.257804   31467 command_runner.go:130] > # log_level = "info"
	I0116 23:14:59.257817   31467 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 23:14:59.257828   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:14:59.257838   31467 command_runner.go:130] > # log_filter = ""
	I0116 23:14:59.257850   31467 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 23:14:59.257860   31467 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 23:14:59.257868   31467 command_runner.go:130] > # separated by comma.
	I0116 23:14:59.257879   31467 command_runner.go:130] > # uid_mappings = ""
	I0116 23:14:59.257892   31467 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 23:14:59.257906   31467 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 23:14:59.257918   31467 command_runner.go:130] > # separated by comma.
	I0116 23:14:59.257929   31467 command_runner.go:130] > # gid_mappings = ""
	I0116 23:14:59.257939   31467 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 23:14:59.257956   31467 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 23:14:59.257963   31467 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 23:14:59.257969   31467 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 23:14:59.257979   31467 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 23:14:59.257990   31467 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 23:14:59.258001   31467 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 23:14:59.258015   31467 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 23:14:59.258029   31467 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 23:14:59.258042   31467 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 23:14:59.258054   31467 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 23:14:59.258061   31467 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 23:14:59.258069   31467 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 23:14:59.258082   31467 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 23:14:59.258094   31467 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 23:14:59.258106   31467 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 23:14:59.258118   31467 command_runner.go:130] > drop_infra_ctr = false
	I0116 23:14:59.258131   31467 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 23:14:59.258143   31467 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 23:14:59.258158   31467 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 23:14:59.258165   31467 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 23:14:59.258174   31467 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 23:14:59.258186   31467 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 23:14:59.258197   31467 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 23:14:59.258212   31467 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 23:14:59.258222   31467 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 23:14:59.258235   31467 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 23:14:59.258246   31467 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 23:14:59.258257   31467 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 23:14:59.258271   31467 command_runner.go:130] > # default_runtime = "runc"
	I0116 23:14:59.258283   31467 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 23:14:59.258296   31467 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 23:14:59.258313   31467 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 23:14:59.258324   31467 command_runner.go:130] > # creation as a file is not desired either.
	I0116 23:14:59.258355   31467 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 23:14:59.258367   31467 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 23:14:59.258376   31467 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 23:14:59.258385   31467 command_runner.go:130] > # ]
	I0116 23:14:59.258396   31467 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 23:14:59.258410   31467 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 23:14:59.258420   31467 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 23:14:59.258432   31467 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 23:14:59.258438   31467 command_runner.go:130] > #
	I0116 23:14:59.258447   31467 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 23:14:59.258460   31467 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 23:14:59.258470   31467 command_runner.go:130] > #  runtime_type = "oci"
	I0116 23:14:59.258478   31467 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 23:14:59.258489   31467 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 23:14:59.258499   31467 command_runner.go:130] > #  allowed_annotations = []
	I0116 23:14:59.258505   31467 command_runner.go:130] > # Where:
	I0116 23:14:59.258522   31467 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 23:14:59.258532   31467 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 23:14:59.258545   31467 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 23:14:59.258558   31467 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 23:14:59.258567   31467 command_runner.go:130] > #   in $PATH.
	I0116 23:14:59.258577   31467 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 23:14:59.258588   31467 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 23:14:59.258602   31467 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 23:14:59.258611   31467 command_runner.go:130] > #   state.
	I0116 23:14:59.258622   31467 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 23:14:59.258640   31467 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 23:14:59.258653   31467 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 23:14:59.258666   31467 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 23:14:59.258678   31467 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 23:14:59.258689   31467 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 23:14:59.258700   31467 command_runner.go:130] > #   The currently recognized values are:
	I0116 23:14:59.258715   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 23:14:59.258729   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 23:14:59.258742   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 23:14:59.258755   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 23:14:59.258768   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 23:14:59.258775   31467 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 23:14:59.258789   31467 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 23:14:59.258803   31467 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 23:14:59.258815   31467 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 23:14:59.258825   31467 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 23:14:59.258833   31467 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 23:14:59.258842   31467 command_runner.go:130] > runtime_type = "oci"
	I0116 23:14:59.258855   31467 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 23:14:59.258864   31467 command_runner.go:130] > runtime_config_path = ""
	I0116 23:14:59.258872   31467 command_runner.go:130] > monitor_path = ""
	I0116 23:14:59.258883   31467 command_runner.go:130] > monitor_cgroup = ""
	I0116 23:14:59.258894   31467 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 23:14:59.258907   31467 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 23:14:59.258916   31467 command_runner.go:130] > # running containers
	I0116 23:14:59.258927   31467 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 23:14:59.258939   31467 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 23:14:59.258991   31467 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 23:14:59.259009   31467 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 23:14:59.259018   31467 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 23:14:59.259028   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 23:14:59.259035   31467 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 23:14:59.259043   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 23:14:59.259055   31467 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 23:14:59.259066   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 23:14:59.259079   31467 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 23:14:59.259095   31467 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 23:14:59.259108   31467 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 23:14:59.259119   31467 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0116 23:14:59.259134   31467 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 23:14:59.259148   31467 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 23:14:59.259165   31467 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 23:14:59.259180   31467 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 23:14:59.259193   31467 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 23:14:59.259204   31467 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 23:14:59.259212   31467 command_runner.go:130] > # Example:
	I0116 23:14:59.259221   31467 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 23:14:59.259233   31467 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 23:14:59.259244   31467 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 23:14:59.259256   31467 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 23:14:59.259265   31467 command_runner.go:130] > # cpuset = "0-1"
	I0116 23:14:59.259275   31467 command_runner.go:130] > # cpushares = 0
	I0116 23:14:59.259283   31467 command_runner.go:130] > # Where:
	I0116 23:14:59.259289   31467 command_runner.go:130] > # The workload name is workload-type.
	I0116 23:14:59.259305   31467 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 23:14:59.259318   31467 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 23:14:59.259333   31467 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 23:14:59.259352   31467 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 23:14:59.259365   31467 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 23:14:59.259372   31467 command_runner.go:130] > # 
	I0116 23:14:59.259378   31467 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 23:14:59.259386   31467 command_runner.go:130] > #
	I0116 23:14:59.259397   31467 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 23:14:59.259411   31467 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 23:14:59.259424   31467 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 23:14:59.259439   31467 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 23:14:59.259452   31467 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 23:14:59.259459   31467 command_runner.go:130] > [crio.image]
	I0116 23:14:59.259465   31467 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 23:14:59.259475   31467 command_runner.go:130] > # default_transport = "docker://"
	I0116 23:14:59.259486   31467 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 23:14:59.259500   31467 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 23:14:59.259515   31467 command_runner.go:130] > # global_auth_file = ""
	I0116 23:14:59.259527   31467 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 23:14:59.259538   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:14:59.259548   31467 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 23:14:59.259562   31467 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 23:14:59.259578   31467 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 23:14:59.259590   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:14:59.259597   31467 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 23:14:59.259609   31467 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 23:14:59.259622   31467 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0116 23:14:59.259635   31467 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0116 23:14:59.259648   31467 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 23:14:59.259659   31467 command_runner.go:130] > # pause_command = "/pause"
	I0116 23:14:59.259671   31467 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 23:14:59.259679   31467 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 23:14:59.259687   31467 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 23:14:59.259694   31467 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 23:14:59.259702   31467 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 23:14:59.259710   31467 command_runner.go:130] > # signature_policy = ""
	I0116 23:14:59.259718   31467 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 23:14:59.259725   31467 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 23:14:59.259733   31467 command_runner.go:130] > # changing them here.
	I0116 23:14:59.259740   31467 command_runner.go:130] > # insecure_registries = [
	I0116 23:14:59.259743   31467 command_runner.go:130] > # ]
	I0116 23:14:59.259755   31467 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 23:14:59.259762   31467 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 23:14:59.259769   31467 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 23:14:59.259774   31467 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 23:14:59.259781   31467 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 23:14:59.259787   31467 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 23:14:59.259793   31467 command_runner.go:130] > # CNI plugins.
	I0116 23:14:59.259797   31467 command_runner.go:130] > [crio.network]
	I0116 23:14:59.259805   31467 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 23:14:59.259810   31467 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 23:14:59.259817   31467 command_runner.go:130] > # cni_default_network = ""
	I0116 23:14:59.259823   31467 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 23:14:59.259833   31467 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 23:14:59.259841   31467 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 23:14:59.259847   31467 command_runner.go:130] > # plugin_dirs = [
	I0116 23:14:59.259852   31467 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 23:14:59.259857   31467 command_runner.go:130] > # ]
	I0116 23:14:59.259863   31467 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 23:14:59.259867   31467 command_runner.go:130] > [crio.metrics]
	I0116 23:14:59.259872   31467 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 23:14:59.259877   31467 command_runner.go:130] > enable_metrics = true
	I0116 23:14:59.259882   31467 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 23:14:59.259887   31467 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 23:14:59.259893   31467 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 23:14:59.259901   31467 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 23:14:59.259907   31467 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 23:14:59.259914   31467 command_runner.go:130] > # metrics_collectors = [
	I0116 23:14:59.259918   31467 command_runner.go:130] > # 	"operations",
	I0116 23:14:59.259925   31467 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 23:14:59.259929   31467 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 23:14:59.259937   31467 command_runner.go:130] > # 	"operations_errors",
	I0116 23:14:59.259944   31467 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 23:14:59.259948   31467 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 23:14:59.259955   31467 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 23:14:59.259959   31467 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 23:14:59.259963   31467 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 23:14:59.259969   31467 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 23:14:59.259973   31467 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 23:14:59.259978   31467 command_runner.go:130] > # 	"containers_oom_total",
	I0116 23:14:59.259982   31467 command_runner.go:130] > # 	"containers_oom",
	I0116 23:14:59.259989   31467 command_runner.go:130] > # 	"processes_defunct",
	I0116 23:14:59.259992   31467 command_runner.go:130] > # 	"operations_total",
	I0116 23:14:59.259997   31467 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 23:14:59.260004   31467 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 23:14:59.260008   31467 command_runner.go:130] > # 	"operations_errors_total",
	I0116 23:14:59.260017   31467 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 23:14:59.260022   31467 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 23:14:59.260029   31467 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 23:14:59.260038   31467 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 23:14:59.260045   31467 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 23:14:59.260049   31467 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 23:14:59.260053   31467 command_runner.go:130] > # ]
	I0116 23:14:59.260058   31467 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 23:14:59.260064   31467 command_runner.go:130] > # metrics_port = 9090
	I0116 23:14:59.260069   31467 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 23:14:59.260075   31467 command_runner.go:130] > # metrics_socket = ""
	I0116 23:14:59.260080   31467 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 23:14:59.260088   31467 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 23:14:59.260095   31467 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 23:14:59.260101   31467 command_runner.go:130] > # certificate on any modification event.
	I0116 23:14:59.260105   31467 command_runner.go:130] > # metrics_cert = ""
	I0116 23:14:59.260111   31467 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 23:14:59.260116   31467 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 23:14:59.260122   31467 command_runner.go:130] > # metrics_key = ""
	I0116 23:14:59.260128   31467 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 23:14:59.260134   31467 command_runner.go:130] > [crio.tracing]
	I0116 23:14:59.260141   31467 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 23:14:59.260148   31467 command_runner.go:130] > # enable_tracing = false
	I0116 23:14:59.260153   31467 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 23:14:59.260162   31467 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 23:14:59.260170   31467 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 23:14:59.260175   31467 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 23:14:59.260181   31467 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 23:14:59.260187   31467 command_runner.go:130] > [crio.stats]
	I0116 23:14:59.260193   31467 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 23:14:59.260201   31467 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 23:14:59.260206   31467 command_runner.go:130] > # stats_collection_period = 0
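The dump above is the crio.conf that minikube provisions on the node; the values that differ from upstream defaults (pids_limit, drop_infra_ctr, pinns_path, pause_image, enable_metrics, and the [crio.runtime.runtimes.runc] block) are set by minikube itself. A minimal sketch of how such overrides are usually carried as a drop-in rather than by editing the main file, assuming the standard /etc/crio/crio.conf.d/ directory (the file name here is illustrative):

    sudo tee /etc/crio/crio.conf.d/02-overrides.conf <<'EOF'
    [crio.runtime]
    pids_limit = 1024
    drop_infra_ctr = false

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    EOF
    sudo systemctl restart crio    # pick up the drop-in (some keys also support live reload)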
	I0116 23:14:59.260277   31467 cni.go:84] Creating CNI manager for ""
	I0116 23:14:59.260285   31467 cni.go:136] 3 nodes found, recommending kindnet
	I0116 23:14:59.260294   31467 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:14:59.260312   31467 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-328490 NodeName:multinode-328490-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:14:59.260409   31467 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-328490-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
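The block above is the kubeadm template minikube renders for this worker (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). For a join, the authoritative ClusterConfiguration actually lives in the cluster; a small sketch for cross-checking it, plus validating a locally rendered config like the one above (the file path is a placeholder, and `kubeadm config validate` is only present in recent kubeadm releases):

    # The in-cluster configuration that the join preflight reads back later in this log
    kubectl -n kube-system get configmap kubeadm-config -o yaml
    # Optional: validate a locally rendered config before handing it to kubeadm
    sudo kubeadm config validate --config kubeadm.yaml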
	
	I0116 23:14:59.260458   31467 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-328490-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-328490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
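The [Unit]/[Service] fragment above is installed as a systemd drop-in for the kubelet (the scp of 10-kubeadm.conf and kubelet.service just below). To confirm what the kubelet is actually started with on the node, a quick check would be:

    sudo systemctl cat kubelet           # unit plus the 10-kubeadm.conf drop-in carrying the ExecStart above
    systemctl status kubelet --no-pager  # verify the flags took effect after the restart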
	I0116 23:14:59.260503   31467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:14:59.268567   31467 command_runner.go:130] > kubeadm
	I0116 23:14:59.268591   31467 command_runner.go:130] > kubectl
	I0116 23:14:59.268595   31467 command_runner.go:130] > kubelet
	I0116 23:14:59.268843   31467 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:14:59.268895   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 23:14:59.276528   31467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0116 23:14:59.291369   31467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:14:59.305808   31467 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 23:14:59.309082   31467 command_runner.go:130] > 192.168.39.50	control-plane.minikube.internal
	I0116 23:14:59.309247   31467 host.go:66] Checking if "multinode-328490" exists ...
	I0116 23:14:59.309530   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:14:59.309584   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:14:59.309621   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:14:59.324671   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I0116 23:14:59.325130   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:14:59.325547   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:14:59.325564   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:14:59.325853   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:14:59.326071   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:14:59.326193   31467 start.go:304] JoinCluster: &{Name:multinode-328490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-328490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:14:59.326363   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 23:14:59.326387   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:14:59.328828   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:14:59.329191   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:14:59.329209   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:14:59.329359   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:14:59.329529   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:14:59.329640   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:14:59.329749   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:14:59.501175   31467 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token swr7fe.i8mfrpdicql7bnmz --discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
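The join command above is produced on the control plane by `kubeadm token create --print-join-command --ttl=0` (the ssh_runner call a few lines up). If the discovery hash ever needs to be reproduced by hand, the usual recipe, assuming the cluster CA at the standard /etc/kubernetes/pki/ca.crt path, is:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'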
	I0116 23:14:59.501221   31467 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 23:14:59.501253   31467 host.go:66] Checking if "multinode-328490" exists ...
	I0116 23:14:59.501691   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:14:59.501747   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:14:59.515739   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0116 23:14:59.516237   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:14:59.516759   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:14:59.516794   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:14:59.517143   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:14:59.517395   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:14:59.517649   31467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-328490-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 23:14:59.517675   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:14:59.520876   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:14:59.521323   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:14:59.521343   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:14:59.521519   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:14:59.521736   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:14:59.521902   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:14:59.522045   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:14:59.716460   31467 command_runner.go:130] > node/multinode-328490-m02 cordoned
	I0116 23:15:02.758617   31467 command_runner.go:130] > pod "busybox-5b5d89c9d6-dcshd" has DeletionTimestamp older than 1 seconds, skipping
	I0116 23:15:02.758647   31467 command_runner.go:130] > node/multinode-328490-m02 drained
	I0116 23:15:02.760456   31467 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 23:15:02.760491   31467 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-d8kbq, kube-system/kube-proxy-bqt7h
	I0116 23:15:02.760527   31467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-328490-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.242850103s)
	I0116 23:15:02.760550   31467 node.go:108] successfully drained node "m02"
	I0116 23:15:02.760886   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:15:02.761106   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:15:02.761457   31467 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 23:15:02.761541   31467 round_trippers.go:463] DELETE https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:15:02.761552   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:02.761562   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:02.761572   31467 round_trippers.go:473]     Content-Type: application/json
	I0116 23:15:02.761585   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:02.774380   31467 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0116 23:15:02.774408   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:02.774420   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:02.774429   31467 round_trippers.go:580]     Content-Length: 171
	I0116 23:15:02.774440   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:02 GMT
	I0116 23:15:02.774448   31467 round_trippers.go:580]     Audit-Id: d0b8931e-e970-406c-8582-96e4574dc222
	I0116 23:15:02.774457   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:02.774470   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:02.774483   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:02.774518   31467 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-328490-m02","kind":"nodes","uid":"37500630-512c-4fdd-b9d7-a7a751761f39"}}
	I0116 23:15:02.774556   31467 node.go:124] successfully deleted node "m02"
	I0116 23:15:02.774569   31467 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 23:15:02.774597   31467 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 23:15:02.774620   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token swr7fe.i8mfrpdicql7bnmz --discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-328490-m02"
	I0116 23:15:02.823556   31467 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 23:15:02.993166   31467 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 23:15:02.993196   31467 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 23:15:03.060715   31467 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:15:03.060745   31467 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:15:03.060837   31467 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 23:15:03.222759   31467 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 23:15:03.747156   31467 command_runner.go:130] > This node has joined the cluster:
	I0116 23:15:03.747178   31467 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 23:15:03.747185   31467 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 23:15:03.747192   31467 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 23:15:03.750147   31467 command_runner.go:130] ! W0116 23:15:02.811828    2669 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 23:15:03.750177   31467 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0116 23:15:03.750190   31467 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0116 23:15:03.750202   31467 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
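The Port-10250 and "already exists" warnings are expected on a re-join: the node still carries its previous kubelet state and the old kubelet may still hold the port, which is why the command runs with --ignore-preflight-errors=all. When such a warning is not expected, a quick way to see what owns the port is:

    sudo ss -lntp | grep ':10250'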
	I0116 23:15:03.750231   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 23:15:04.048507   31467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=multinode-328490 minikube.k8s.io/updated_at=2024_01_16T23_15_04_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:15:04.142015   31467 command_runner.go:130] > node/multinode-328490-m02 labeled
	I0116 23:15:04.155340   31467 command_runner.go:130] > node/multinode-328490-m03 labeled
	I0116 23:15:04.158953   31467 start.go:306] JoinCluster complete in 4.832757853s
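The whole JoinCluster step above boils down to: cordon/drain the stale node, delete its Node object, run kubeadm join on the worker, then re-apply the minikube labels. A condensed sketch of the equivalent manual sequence, using the names from this run (token and hash as printed earlier):

    kubectl drain multinode-328490-m02 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data
    kubectl delete node multinode-328490-m02
    # on the worker:
    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> --node-name=multinode-328490-m02
    kubectl label node multinode-328490-m02 minikube.k8s.io/primary=false --overwrite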
	I0116 23:15:04.158977   31467 cni.go:84] Creating CNI manager for ""
	I0116 23:15:04.158983   31467 cni.go:136] 3 nodes found, recommending kindnet
	I0116 23:15:04.159037   31467 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 23:15:04.165687   31467 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 23:15:04.165721   31467 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 23:15:04.165732   31467 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 23:15:04.165742   31467 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 23:15:04.165751   31467 command_runner.go:130] > Access: 2024-01-16 23:12:33.865314209 +0000
	I0116 23:15:04.165759   31467 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 23:15:04.165769   31467 command_runner.go:130] > Change: 2024-01-16 23:12:32.165314209 +0000
	I0116 23:15:04.165774   31467 command_runner.go:130] >  Birth: -
	I0116 23:15:04.165831   31467 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 23:15:04.165846   31467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 23:15:04.187572   31467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 23:15:04.503253   31467 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 23:15:04.507089   31467 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 23:15:04.512315   31467 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 23:15:04.522262   31467 command_runner.go:130] > daemonset.apps/kindnet configured
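The CNI manifest applied here is the kindnet DaemonSet; a quick way to confirm it scheduled a pod onto the re-joined node (assuming the manifest's usual app=kindnet label):

    kubectl -n kube-system get pods -l app=kindnet -o wide
    kubectl -n kube-system rollout status daemonset/kindnet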
	I0116 23:15:04.525602   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:15:04.525845   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:15:04.526173   31467 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 23:15:04.526188   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.526198   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.526213   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.529501   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:15:04.529517   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.529523   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.529529   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.529534   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.529539   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.529544   31467 round_trippers.go:580]     Content-Length: 291
	I0116 23:15:04.529552   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.529563   31467 round_trippers.go:580]     Audit-Id: 5ca7c5de-fa35-4e70-9417-41e62e8ecc2e
	I0116 23:15:04.529586   31467 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9e31c201-6ba7-47ab-b7c2-74a96553d8c6","resourceVersion":"898","creationTimestamp":"2024-01-16T23:01:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 23:15:04.529667   31467 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-328490" context rescaled to 1 replicas
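The Scale subresource call above pins coredns back to a single replica; the kubectl equivalent is simply:

    kubectl -n kube-system scale deployment coredns --replicas=1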
	I0116 23:15:04.529694   31467 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 23:15:04.531931   31467 out.go:177] * Verifying Kubernetes components...
	I0116 23:15:04.533493   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:15:04.548854   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:15:04.549113   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:15:04.549353   31467 node_ready.go:35] waiting up to 6m0s for node "multinode-328490-m02" to be "Ready" ...
	I0116 23:15:04.549439   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:15:04.549449   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.549460   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.549472   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.552745   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:15:04.552763   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.552769   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.552775   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.552782   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.552790   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.552801   31467 round_trippers.go:580]     Audit-Id: 3e4e1469-d7f2-43d3-919e-66ad1ec4f090
	I0116 23:15:04.552813   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.553416   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m02","uid":"5474a74a-845b-4b16-acc7-38a34a48e2ab","resourceVersion":"1046","creationTimestamp":"2024-01-16T23:15:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_15_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:15:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0116 23:15:04.553717   31467 node_ready.go:49] node "multinode-328490-m02" has status "Ready":"True"
	I0116 23:15:04.553740   31467 node_ready.go:38] duration metric: took 4.368153ms waiting for node "multinode-328490-m02" to be "Ready" ...
	I0116 23:15:04.553758   31467 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:15:04.553827   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:15:04.553836   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.553846   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.553859   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.560135   31467 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 23:15:04.560156   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.560166   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.560175   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.560182   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.560190   31467 round_trippers.go:580]     Audit-Id: d01f96c6-8f72-42c9-94bd-aa2d12fe4701
	I0116 23:15:04.560199   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.560208   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.561651   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1053"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"878","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82206 chars]
	I0116 23:15:04.564105   31467 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.564180   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:15:04.564188   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.564195   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.564201   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.566179   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:15:04.566196   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.566205   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.566213   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.566221   31467 round_trippers.go:580]     Audit-Id: a937fc19-9314-4f87-b1d5-186fdeb63a3d
	I0116 23:15:04.566232   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.566245   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.566257   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.566441   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"878","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 23:15:04.566989   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:04.567008   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.567019   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.567030   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.569058   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:04.569074   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.569080   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.569085   31467 round_trippers.go:580]     Audit-Id: fa50410a-d558-4251-8b87-c42e0209cdbc
	I0116 23:15:04.569090   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.569095   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.569099   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.569105   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.569249   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:15:04.569532   31467 pod_ready.go:92] pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:04.569546   31467 pod_ready.go:81] duration metric: took 5.419979ms waiting for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.569554   31467 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.569598   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:15:04.569606   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.569613   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.569619   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.571603   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:15:04.571623   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.571634   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.571643   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.571650   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.571658   31467 round_trippers.go:580]     Audit-Id: 68e18243-64ba-4667-92a1-b6df1252eae3
	I0116 23:15:04.571663   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.571671   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.571873   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"887","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 23:15:04.572167   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:04.572178   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.572186   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.572192   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.573863   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:15:04.573878   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.573884   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.573889   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.573894   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.573901   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.573906   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.573911   31467 round_trippers.go:580]     Audit-Id: 4ecac615-ba7a-4dfb-955a-4e9ec8cdd664
	I0116 23:15:04.574050   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:15:04.574289   31467 pod_ready.go:92] pod "etcd-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:04.574301   31467 pod_ready.go:81] duration metric: took 4.741184ms waiting for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.574314   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.574374   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-328490
	I0116 23:15:04.574382   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.574389   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.574394   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.576118   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:15:04.576131   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.576136   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.576142   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.576147   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.576155   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.576160   31467 round_trippers.go:580]     Audit-Id: 190001db-de1a-4481-ac48-262d81cc9e99
	I0116 23:15:04.576165   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.576293   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-328490","namespace":"kube-system","uid":"4deddb28-05c8-440a-8c76-f45eaa7c42d9","resourceVersion":"900","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.mirror":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.seen":"2024-01-16T23:01:56.235897532Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 23:15:04.576613   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:04.576623   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.576630   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.576635   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.578411   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:15:04.578426   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.578432   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.578437   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.578442   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.578447   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.578452   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.578456   31467 round_trippers.go:580]     Audit-Id: bcf8baa0-d87a-4aec-80e3-146e7199e767
	I0116 23:15:04.578607   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:15:04.578906   31467 pod_ready.go:92] pod "kube-apiserver-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:04.578920   31467 pod_ready.go:81] duration metric: took 4.600568ms waiting for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.578928   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.578973   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:15:04.578982   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.578989   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.578995   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.580721   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:15:04.580735   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.580741   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.580746   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.580750   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.580756   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.580761   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.580768   31467 round_trippers.go:580]     Audit-Id: 3ee650d1-c3d1-4e76-9298-39f00345f9b2
	I0116 23:15:04.581057   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"901","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 23:15:04.581355   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:04.581365   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.581372   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.581377   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.583078   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:15:04.583091   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.583098   31467 round_trippers.go:580]     Audit-Id: 74a0bb84-da40-4fae-94f0-1b66eaee67f8
	I0116 23:15:04.583103   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.583109   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.583116   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.583121   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.583127   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.583253   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:15:04.583500   31467 pod_ready.go:92] pod "kube-controller-manager-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:04.583513   31467 pod_ready.go:81] duration metric: took 4.579922ms waiting for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.583520   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.749903   31467 request.go:629] Waited for 166.323645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vmdk
	I0116 23:15:04.749979   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vmdk
	I0116 23:15:04.749984   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.749992   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.749998   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.755103   31467 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 23:15:04.755128   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.755135   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.755140   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.755146   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.755151   31467 round_trippers.go:580]     Audit-Id: b0e56079-917b-4fb6-8ce0-0347ea1ae319
	I0116 23:15:04.755157   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.755163   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.755352   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vmdk","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba882fac-57b9-4e3a-afc5-09f016f542bf","resourceVersion":"860","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 23:15:04.950201   31467 request.go:629] Waited for 194.380648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:04.950258   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:04.950263   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:04.950270   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:04.950276   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:04.953263   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:04.953282   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:04.953288   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:04.953296   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:04.953304   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:04 GMT
	I0116 23:15:04.953319   31467 round_trippers.go:580]     Audit-Id: b2c8f578-090a-4eb3-a60f-3328b815a10a
	I0116 23:15:04.953328   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:04.953338   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:04.953501   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:15:04.953845   31467 pod_ready.go:92] pod "kube-proxy-6vmdk" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:04.953862   31467 pod_ready.go:81] duration metric: took 370.336576ms waiting for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:04.953871   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:05.149902   31467 request.go:629] Waited for 195.964315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:15:05.149976   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:15:05.149986   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:05.149997   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:05.150010   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:05.152751   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:05.152774   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:05.152784   31467 round_trippers.go:580]     Audit-Id: 866f498c-c3ae-40d0-97ef-55ef8bf7935a
	I0116 23:15:05.152791   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:05.152799   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:05.152806   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:05.152814   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:05.152821   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:05 GMT
	I0116 23:15:05.153210   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqt7h","generateName":"kube-proxy-","namespace":"kube-system","uid":"8903f17c-7460-4896-826d-76d99335348d","resourceVersion":"1050","creationTimestamp":"2024-01-16T23:03:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:03:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0116 23:15:05.350054   31467 request.go:629] Waited for 196.369144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:15:05.350123   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:15:05.350130   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:05.350137   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:05.350144   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:05.352866   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:05.352890   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:05.352900   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:05 GMT
	I0116 23:15:05.352909   31467 round_trippers.go:580]     Audit-Id: 2919e047-9650-4f2d-b237-0840ad0b6ef5
	I0116 23:15:05.352918   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:05.352926   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:05.352933   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:05.352940   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:05.353207   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m02","uid":"5474a74a-845b-4b16-acc7-38a34a48e2ab","resourceVersion":"1046","creationTimestamp":"2024-01-16T23:15:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_15_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:15:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0116 23:15:05.549829   31467 request.go:629] Waited for 95.30038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:15:05.549887   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:15:05.549892   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:05.549901   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:05.549907   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:05.553107   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:15:05.553132   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:05.553152   31467 round_trippers.go:580]     Audit-Id: 54a363ae-f08c-4c08-85e3-50ccb784abb2
	I0116 23:15:05.553161   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:05.553168   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:05.553175   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:05.553183   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:05.553191   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:05 GMT
	I0116 23:15:05.553397   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqt7h","generateName":"kube-proxy-","namespace":"kube-system","uid":"8903f17c-7460-4896-826d-76d99335348d","resourceVersion":"1062","creationTimestamp":"2024-01-16T23:03:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:03:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0116 23:15:05.750264   31467 request.go:629] Waited for 196.363242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:15:05.750325   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:15:05.750330   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:05.750350   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:05.750356   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:05.753127   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:05.753161   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:05.753171   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:05.753179   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:05.753188   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:05 GMT
	I0116 23:15:05.753196   31467 round_trippers.go:580]     Audit-Id: 78539586-5475-4c4f-bf02-fc7c06064510
	I0116 23:15:05.753202   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:05.753213   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:05.753373   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m02","uid":"5474a74a-845b-4b16-acc7-38a34a48e2ab","resourceVersion":"1046","creationTimestamp":"2024-01-16T23:15:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_15_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:15:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0116 23:15:05.753667   31467 pod_ready.go:92] pod "kube-proxy-bqt7h" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:05.753684   31467 pod_ready.go:81] duration metric: took 799.80716ms waiting for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:05.753694   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:05.950130   31467 request.go:629] Waited for 196.367467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:15:05.950215   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:15:05.950225   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:05.950233   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:05.950252   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:05.954017   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:15:05.954041   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:05.954051   31467 round_trippers.go:580]     Audit-Id: 1ea1cc2e-ae6a-4e0c-923a-1fe560144ee0
	I0116 23:15:05.954059   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:05.954067   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:05.954075   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:05.954085   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:05.954095   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:05 GMT
	I0116 23:15:05.954306   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tc46j","generateName":"kube-proxy-","namespace":"kube-system","uid":"57831696-d514-4547-9f95-59ea41569c65","resourceVersion":"727","creationTimestamp":"2024-01-16T23:04:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 23:15:06.150132   31467 request.go:629] Waited for 195.376854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:15:06.150208   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:15:06.150213   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:06.150222   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:06.150228   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:06.152866   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:06.152894   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:06.152904   31467 round_trippers.go:580]     Audit-Id: 5a947b47-4c04-4fc8-9fbd-4d5781dbdaef
	I0116 23:15:06.152912   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:06.152919   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:06.152925   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:06.152932   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:06.152939   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:06 GMT
	I0116 23:15:06.153157   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m03","uid":"f19a8ad4-4a7f-4648-b320-7d48cffd62df","resourceVersion":"1047","creationTimestamp":"2024-01-16T23:05:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_15_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:05:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0116 23:15:06.153446   31467 pod_ready.go:92] pod "kube-proxy-tc46j" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:06.153463   31467 pod_ready.go:81] duration metric: took 399.763022ms waiting for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:06.153482   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:06.349525   31467 request.go:629] Waited for 195.965092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:15:06.349591   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:15:06.349598   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:06.349608   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:06.349614   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:06.352292   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:06.352310   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:06.352317   31467 round_trippers.go:580]     Audit-Id: 10b8ef6d-0666-49e6-a462-34cd26162bb2
	I0116 23:15:06.352323   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:06.352328   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:06.352332   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:06.352338   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:06.352343   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:06 GMT
	I0116 23:15:06.352705   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-328490","namespace":"kube-system","uid":"0f132072-d49d-46ed-a25f-526a38a74885","resourceVersion":"893","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.mirror":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.seen":"2024-01-16T23:01:56.235892116Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 23:15:06.550498   31467 request.go:629] Waited for 197.382013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:06.550569   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:15:06.550574   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:06.550581   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:06.550589   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:06.553326   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:15:06.553348   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:06.553359   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:06.553369   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:06.553379   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:06 GMT
	I0116 23:15:06.553387   31467 round_trippers.go:580]     Audit-Id: bc923ff1-6420-41ee-b69d-eede1899b2cc
	I0116 23:15:06.553404   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:06.553412   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:06.553608   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:15:06.554033   31467 pod_ready.go:92] pod "kube-scheduler-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:15:06.554056   31467 pod_ready.go:81] duration metric: took 400.564713ms waiting for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:15:06.554067   31467 pod_ready.go:38] duration metric: took 2.000294342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
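	(Editor's note, not part of the captured log.) The preceding block shows the pod_ready loop: for each system-critical pod it GETs the pod, GETs the node it runs on, and checks the Ready condition until a 6m budget expires. The snippet below is a minimal sketch of that same check done directly with client-go; the kubeconfig path and pod name are placeholders, not values taken from this report, and this is not the minikube implementation.

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; substitute the profile's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Poll every 2s for up to 6 minutes, mirroring the 6m0s budget seen in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "example-pod", metav1.GetOptions{})
				if err != nil {
					return false, nil // retry on transient errors
				}
				return isPodReady(pod), nil
			})
		fmt.Println("ready wait finished, err:", err)
	}
	```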
	I0116 23:15:06.554080   31467 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:15:06.554132   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:15:06.567594   31467 system_svc.go:56] duration metric: took 13.505688ms WaitForService to wait for kubelet.
	I0116 23:15:06.567621   31467 kubeadm.go:581] duration metric: took 2.037904705s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:15:06.567652   31467 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:15:06.750105   31467 request.go:629] Waited for 182.385753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 23:15:06.750161   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 23:15:06.750168   31467 round_trippers.go:469] Request Headers:
	I0116 23:15:06.750178   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:15:06.750187   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:15:06.756280   31467 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 23:15:06.756306   31467 round_trippers.go:577] Response Headers:
	I0116 23:15:06.756317   31467 round_trippers.go:580]     Audit-Id: c937ec1a-56e1-4192-8218-4b407f52f638
	I0116 23:15:06.756326   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:15:06.756337   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:15:06.756348   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:15:06.756360   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:15:06.756370   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:15:06 GMT
	I0116 23:15:06.756589   31467 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1064"},"items":[{"metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16210 chars]
	I0116 23:15:06.757380   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:15:06.757413   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:15:06.757425   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:15:06.757435   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:15:06.757441   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:15:06.757450   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:15:06.757457   31467 node_conditions.go:105] duration metric: took 189.799356ms to run NodePressure ...
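	(Editor's note, not part of the captured log.) Several requests above report "Waited ... due to client-side throttling"; the rest.Config dumped earlier in this section has QPS:0 and Burst:0, so the client falls back to its default rate limiter (5 QPS, burst 10). A minimal sketch of raising those limits on a rest.Config is shown below; the kubeconfig path and the chosen limits are illustrative assumptions, not settings used by this test run.

	```go
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Leaving QPS/Burst at 0 means the default limiter (5 QPS, burst 10), which is
		// what produces the client-side throttling waits seen in the log above.
		config.QPS = 50
		config.Burst = 100

		client := kubernetes.NewForConfigOrDie(config)
		fmt.Println("clientset created:", client != nil)
	}
	```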
	I0116 23:15:06.757473   31467 start.go:228] waiting for startup goroutines ...
	I0116 23:15:06.757499   31467 start.go:242] writing updated cluster config ...
	I0116 23:15:06.758081   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:15:06.758207   31467 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/config.json ...
	I0116 23:15:06.761599   31467 out.go:177] * Starting worker node multinode-328490-m03 in cluster multinode-328490
	I0116 23:15:06.763119   31467 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:15:06.763145   31467 cache.go:56] Caching tarball of preloaded images
	I0116 23:15:06.763251   31467 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:15:06.763265   31467 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:15:06.763364   31467 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/config.json ...
	I0116 23:15:06.763583   31467 start.go:365] acquiring machines lock for multinode-328490-m03: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:15:06.763627   31467 start.go:369] acquired machines lock for "multinode-328490-m03" in 24.368µs
	I0116 23:15:06.763638   31467 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:15:06.763643   31467 fix.go:54] fixHost starting: m03
	I0116 23:15:06.763916   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:15:06.763954   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:15:06.778510   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0116 23:15:06.778919   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:15:06.779331   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:15:06.779351   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:15:06.779633   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:15:06.779801   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .DriverName
	I0116 23:15:06.779930   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetState
	I0116 23:15:06.781325   31467 fix.go:102] recreateIfNeeded on multinode-328490-m03: state=Running err=<nil>
	W0116 23:15:06.781338   31467 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:15:06.783844   31467 out.go:177] * Updating the running kvm2 "multinode-328490-m03" VM ...
	I0116 23:15:06.785228   31467 machine.go:88] provisioning docker machine ...
	I0116 23:15:06.785243   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .DriverName
	I0116 23:15:06.785436   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetMachineName
	I0116 23:15:06.785610   31467 buildroot.go:166] provisioning hostname "multinode-328490-m03"
	I0116 23:15:06.785626   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetMachineName
	I0116 23:15:06.785771   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:15:06.788023   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:06.788365   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:15:06.788393   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:06.788549   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:15:06.788716   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:15:06.788831   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:15:06.788977   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:15:06.789147   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:15:06.789517   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0116 23:15:06.789542   31467 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328490-m03 && echo "multinode-328490-m03" | sudo tee /etc/hostname
	I0116 23:15:06.932383   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328490-m03
	
	I0116 23:15:06.932415   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:15:06.935149   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:06.935486   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:15:06.935513   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:06.935681   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:15:06.935850   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:15:06.935991   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:15:06.936097   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:15:06.936226   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:15:06.936528   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0116 23:15:06.936546   31467 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-328490-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-328490-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-328490-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:15:07.062856   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
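The provisioning step above sets the guest hostname over SSH and then idempotently registers it in /etc/hosts: the 127.0.1.1 entry is rewritten in place if one exists, appended otherwise. A minimal standalone sketch of that logic, with the hostname from this run filled in:

    HOSTNAME=multinode-328490-m03
    sudo hostname "$HOSTNAME" && echo "$HOSTNAME" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$HOSTNAME" /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            # an entry already exists: rewrite it to point at the new hostname
            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOSTNAME/g" /etc/hosts
        else
            # no entry yet: append one
            echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
        fi
    fi

Running it a second time is a no-op, which is why the provisioner can safely re-run it against an already-configured machine.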
	I0116 23:15:07.062888   31467 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:15:07.062908   31467 buildroot.go:174] setting up certificates
	I0116 23:15:07.062920   31467 provision.go:83] configureAuth start
	I0116 23:15:07.062933   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetMachineName
	I0116 23:15:07.063216   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetIP
	I0116 23:15:07.066044   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.066459   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:15:07.066491   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.066621   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:15:07.068897   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.069279   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:15:07.069310   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.069436   31467 provision.go:138] copyHostCerts
	I0116 23:15:07.069459   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:15:07.069485   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:15:07.069494   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:15:07.069562   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:15:07.069626   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:15:07.069644   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:15:07.069650   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:15:07.069676   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:15:07.069737   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:15:07.069761   31467 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:15:07.069772   31467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:15:07.069812   31467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:15:07.069877   31467 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.multinode-328490-m03 san=[192.168.39.157 192.168.39.157 localhost 127.0.0.1 minikube multinode-328490-m03]
	I0116 23:15:07.391690   31467 provision.go:172] copyRemoteCerts
	I0116 23:15:07.391762   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:15:07.391792   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:15:07.394226   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.394588   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:15:07.394615   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.394800   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:15:07.394998   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:15:07.395193   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:15:07.395310   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m03/id_rsa Username:docker}
	I0116 23:15:07.487329   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 23:15:07.487394   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:15:07.510064   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 23:15:07.510124   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 23:15:07.533577   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 23:15:07.533640   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:15:07.558554   31467 provision.go:86] duration metric: configureAuth took 495.620793ms
	I0116 23:15:07.558582   31467 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:15:07.558786   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:15:07.558869   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:15:07.561594   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.561992   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:15:07.562034   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:15:07.562173   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:15:07.562379   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:15:07.562566   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:15:07.562724   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:15:07.562898   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:15:07.563267   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0116 23:15:07.563284   31467 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:16:38.248564   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:16:38.248593   31467 machine.go:91] provisioned docker machine in 1m31.463354128s
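The %!s(MISSING) in the command above is a Go fmt artifact (a %s verb formatted without its argument); the echoed output shows what was actually written. In effect the provisioner records an insecure-registry flag for CRI-O and restarts the service, roughly:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube \
      && sudo systemctl restart crio

Judging by the timestamps (command issued at 23:15:07, output returned at 23:16:38), nearly all of the 1m31s provisioning time reported above was spent waiting on that crio restart.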
	I0116 23:16:38.248604   31467 start.go:300] post-start starting for "multinode-328490-m03" (driver="kvm2")
	I0116 23:16:38.248615   31467 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:16:38.248633   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .DriverName
	I0116 23:16:38.248982   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:16:38.249014   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:16:38.251794   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.252145   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:16:38.252177   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.252387   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:16:38.252587   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:16:38.252774   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:16:38.252925   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m03/id_rsa Username:docker}
	I0116 23:16:38.348384   31467 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:16:38.352498   31467 command_runner.go:130] > NAME=Buildroot
	I0116 23:16:38.352532   31467 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 23:16:38.352540   31467 command_runner.go:130] > ID=buildroot
	I0116 23:16:38.352550   31467 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 23:16:38.352558   31467 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 23:16:38.352594   31467 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:16:38.352608   31467 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:16:38.352687   31467 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:16:38.352787   31467 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:16:38.352801   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /etc/ssl/certs/149302.pem
	I0116 23:16:38.352907   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:16:38.362432   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:16:38.384071   31467 start.go:303] post-start completed in 135.455154ms
	I0116 23:16:38.384092   31467 fix.go:56] fixHost completed within 1m31.620447989s
	I0116 23:16:38.384112   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:16:38.386771   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.387119   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:16:38.387161   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.387312   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:16:38.387524   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:16:38.387688   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:16:38.387875   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:16:38.388044   31467 main.go:141] libmachine: Using SSH client type: native
	I0116 23:16:38.388352   31467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0116 23:16:38.388364   31467 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:16:38.515080   31467 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705446998.504781372
	
	I0116 23:16:38.515109   31467 fix.go:206] guest clock: 1705446998.504781372
	I0116 23:16:38.515121   31467 fix.go:219] Guest: 2024-01-16 23:16:38.504781372 +0000 UTC Remote: 2024-01-16 23:16:38.384095694 +0000 UTC m=+554.539415822 (delta=120.685678ms)
	I0116 23:16:38.515141   31467 fix.go:190] guest clock delta is within tolerance: 120.685678ms
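As a quick check of the reported value: guest 23:16:38.504781372 minus remote 23:16:38.384095694 is 0.120685678 s, i.e. exactly the 120.685678 ms logged above, so the guest clock falls within tolerance as stated.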
	I0116 23:16:38.515147   31467 start.go:83] releasing machines lock for "multinode-328490-m03", held for 1m31.751512991s
	I0116 23:16:38.515171   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .DriverName
	I0116 23:16:38.515476   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetIP
	I0116 23:16:38.518082   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.518411   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:16:38.518445   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.520417   31467 out.go:177] * Found network options:
	I0116 23:16:38.521770   31467 out.go:177]   - NO_PROXY=192.168.39.50,192.168.39.152
	W0116 23:16:38.522944   31467 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 23:16:38.522964   31467 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 23:16:38.522976   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .DriverName
	I0116 23:16:38.523456   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .DriverName
	I0116 23:16:38.523625   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .DriverName
	I0116 23:16:38.523715   31467 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0116 23:16:38.523794   31467 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 23:16:38.523818   31467 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 23:16:38.523826   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:16:38.523885   31467 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:16:38.523909   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHHostname
	I0116 23:16:38.526210   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.526407   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.526599   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:16:38.526631   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.526757   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:16:38.526763   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:16:38.526785   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:38.526928   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHPort
	I0116 23:16:38.526946   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:16:38.527104   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:16:38.527120   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHKeyPath
	I0116 23:16:38.527290   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetSSHUsername
	I0116 23:16:38.527286   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m03/id_rsa Username:docker}
	I0116 23:16:38.527417   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m03/id_rsa Username:docker}
	I0116 23:16:38.759247   31467 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 23:16:38.759287   31467 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 23:16:38.764538   31467 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 23:16:38.764720   31467 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:16:38.764770   31467 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:16:38.772954   31467 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0116 23:16:38.772984   31467 start.go:475] detecting cgroup driver to use...
	I0116 23:16:38.773037   31467 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:16:38.785786   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:16:38.798055   31467 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:16:38.798106   31467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:16:38.810688   31467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:16:38.822763   31467 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:16:38.953771   31467 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:16:39.087694   31467 docker.go:233] disabling docker service ...
	I0116 23:16:39.087773   31467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:16:39.102653   31467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:16:39.115377   31467 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:16:39.258919   31467 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:16:39.439084   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
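Before wiring up CRI-O, the runner shuts down and masks the runtimes that would otherwise compete for the node's containers. A condensed sketch of the same sequence of service operations seen above:

    # stop containerd and the cri-dockerd shim, then mask them so they stay off
    sudo systemctl stop -f containerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service

    # same treatment for the Docker engine itself
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service

    # confirm nothing is still active before continuing
    sudo systemctl is-active --quiet docker || echo "docker is down"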
	I0116 23:16:39.456335   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:16:39.473303   31467 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 23:16:39.473350   31467 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:16:39.473407   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:16:39.482482   31467 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:16:39.482548   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:16:39.491623   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:16:39.500219   31467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:16:39.509179   31467 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:16:39.518078   31467 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:16:39.525674   31467 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 23:16:39.525742   31467 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:16:39.536959   31467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:16:39.660510   31467 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:16:42.378146   31467 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.717598639s)
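Taken together, the sed edits above pin the pause image and the cgroup settings in the 02-crio.conf drop-in before crio is restarted; grepping the file afterwards would show something like:

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

The conmon_cgroup line is first deleted and then re-inserted directly after cgroup_manager, so the pair always ends up adjacent and consistent regardless of what the file contained before.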
	I0116 23:16:42.378174   31467 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:16:42.378220   31467 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:16:42.382513   31467 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 23:16:42.382538   31467 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 23:16:42.382549   31467 command_runner.go:130] > Device: 16h/22d	Inode: 1219        Links: 1
	I0116 23:16:42.382558   31467 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 23:16:42.382565   31467 command_runner.go:130] > Access: 2024-01-16 23:16:42.284328634 +0000
	I0116 23:16:42.382574   31467 command_runner.go:130] > Modify: 2024-01-16 23:16:42.284328634 +0000
	I0116 23:16:42.382583   31467 command_runner.go:130] > Change: 2024-01-16 23:16:42.284328634 +0000
	I0116 23:16:42.382592   31467 command_runner.go:130] >  Birth: -
	I0116 23:16:42.382652   31467 start.go:543] Will wait 60s for crictl version
	I0116 23:16:42.382697   31467 ssh_runner.go:195] Run: which crictl
	I0116 23:16:42.386059   31467 command_runner.go:130] > /usr/bin/crictl
	I0116 23:16:42.386127   31467 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:16:42.421452   31467 command_runner.go:130] > Version:  0.1.0
	I0116 23:16:42.421490   31467 command_runner.go:130] > RuntimeName:  cri-o
	I0116 23:16:42.421495   31467 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 23:16:42.421501   31467 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 23:16:42.422576   31467 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:16:42.422640   31467 ssh_runner.go:195] Run: crio --version
	I0116 23:16:42.463134   31467 command_runner.go:130] > crio version 1.24.1
	I0116 23:16:42.463156   31467 command_runner.go:130] > Version:          1.24.1
	I0116 23:16:42.463164   31467 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 23:16:42.463168   31467 command_runner.go:130] > GitTreeState:     dirty
	I0116 23:16:42.463174   31467 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 23:16:42.463184   31467 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 23:16:42.463189   31467 command_runner.go:130] > Compiler:         gc
	I0116 23:16:42.463193   31467 command_runner.go:130] > Platform:         linux/amd64
	I0116 23:16:42.463200   31467 command_runner.go:130] > Linkmode:         dynamic
	I0116 23:16:42.463211   31467 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 23:16:42.463217   31467 command_runner.go:130] > SeccompEnabled:   true
	I0116 23:16:42.463253   31467 command_runner.go:130] > AppArmorEnabled:  false
	I0116 23:16:42.464443   31467 ssh_runner.go:195] Run: crio --version
	I0116 23:16:42.509218   31467 command_runner.go:130] > crio version 1.24.1
	I0116 23:16:42.509242   31467 command_runner.go:130] > Version:          1.24.1
	I0116 23:16:42.509258   31467 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 23:16:42.509270   31467 command_runner.go:130] > GitTreeState:     dirty
	I0116 23:16:42.509279   31467 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 23:16:42.509296   31467 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 23:16:42.509303   31467 command_runner.go:130] > Compiler:         gc
	I0116 23:16:42.509313   31467 command_runner.go:130] > Platform:         linux/amd64
	I0116 23:16:42.509324   31467 command_runner.go:130] > Linkmode:         dynamic
	I0116 23:16:42.509337   31467 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 23:16:42.509347   31467 command_runner.go:130] > SeccompEnabled:   true
	I0116 23:16:42.509355   31467 command_runner.go:130] > AppArmorEnabled:  false
	I0116 23:16:42.512612   31467 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:16:42.514273   31467 out.go:177]   - env NO_PROXY=192.168.39.50
	I0116 23:16:42.515667   31467 out.go:177]   - env NO_PROXY=192.168.39.50,192.168.39.152
	I0116 23:16:42.517078   31467 main.go:141] libmachine: (multinode-328490-m03) Calling .GetIP
	I0116 23:16:42.519781   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:42.520124   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:a2:20", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:04:14 +0000 UTC Type:0 Mac:52:54:00:25:a2:20 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-328490-m03 Clientid:01:52:54:00:25:a2:20}
	I0116 23:16:42.520142   31467 main.go:141] libmachine: (multinode-328490-m03) DBG | domain multinode-328490-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:25:a2:20 in network mk-multinode-328490
	I0116 23:16:42.520338   31467 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:16:42.524300   31467 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0116 23:16:42.524352   31467 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490 for IP: 192.168.39.157
	I0116 23:16:42.524381   31467 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:16:42.524515   31467 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:16:42.524551   31467 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:16:42.524569   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 23:16:42.524583   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 23:16:42.524595   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 23:16:42.524607   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 23:16:42.524655   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:16:42.524681   31467 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:16:42.524691   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:16:42.524716   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:16:42.524739   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:16:42.524767   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:16:42.524821   31467 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:16:42.524858   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem -> /usr/share/ca-certificates/14930.pem
	I0116 23:16:42.524877   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> /usr/share/ca-certificates/149302.pem
	I0116 23:16:42.524896   31467 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:16:42.525285   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:16:42.548546   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:16:42.569286   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:16:42.590252   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:16:42.612028   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:16:42.635545   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:16:42.657406   31467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:16:42.679343   31467 ssh_runner.go:195] Run: openssl version
	I0116 23:16:42.684552   31467 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 23:16:42.684611   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:16:42.694687   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:16:42.698717   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:16:42.698840   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:16:42.698893   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:16:42.703677   31467 command_runner.go:130] > 51391683
	I0116 23:16:42.704087   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:16:42.711717   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:16:42.720612   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:16:42.724495   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:16:42.724787   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:16:42.724821   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:16:42.729833   31467 command_runner.go:130] > 3ec20f2e
	I0116 23:16:42.729874   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:16:42.737575   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:16:42.746647   31467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:16:42.750648   31467 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:16:42.750781   31467 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:16:42.750831   31467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:16:42.756589   31467 command_runner.go:130] > b5213941
	I0116 23:16:42.756854   31467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
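The eight-hex-digit values printed by openssl x509 -hash above (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes, and the <hash>.0 symlinks created in /etc/ssl/certs are what OpenSSL's hashed CA directory lookup expects. Reproducing the step by hand for one of the certificates:

    # compute the subject hash and create (or refresh) the lookup symlink
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$HASH.0"

For the minikube CA in this run the hash works out to b5213941, matching the symlink name tested above.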
	I0116 23:16:42.764969   31467 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:16:42.768648   31467 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 23:16:42.768827   31467 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 23:16:42.768894   31467 ssh_runner.go:195] Run: crio config
	I0116 23:16:42.823890   31467 command_runner.go:130] ! time="2024-01-16 23:16:42.813723165Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 23:16:42.823916   31467 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 23:16:42.830258   31467 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 23:16:42.830277   31467 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 23:16:42.830284   31467 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 23:16:42.830287   31467 command_runner.go:130] > #
	I0116 23:16:42.830295   31467 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 23:16:42.830305   31467 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 23:16:42.830315   31467 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 23:16:42.830327   31467 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 23:16:42.830345   31467 command_runner.go:130] > # reload'.
	I0116 23:16:42.830356   31467 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 23:16:42.830365   31467 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 23:16:42.830383   31467 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 23:16:42.830392   31467 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 23:16:42.830399   31467 command_runner.go:130] > [crio]
	I0116 23:16:42.830408   31467 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 23:16:42.830422   31467 command_runner.go:130] > # containers images, in this directory.
	I0116 23:16:42.830429   31467 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 23:16:42.830439   31467 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 23:16:42.830444   31467 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 23:16:42.830450   31467 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 23:16:42.830459   31467 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 23:16:42.830465   31467 command_runner.go:130] > storage_driver = "overlay"
	I0116 23:16:42.830472   31467 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 23:16:42.830478   31467 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 23:16:42.830482   31467 command_runner.go:130] > storage_option = [
	I0116 23:16:42.830486   31467 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 23:16:42.830490   31467 command_runner.go:130] > ]
	I0116 23:16:42.830496   31467 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 23:16:42.830507   31467 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 23:16:42.830511   31467 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 23:16:42.830517   31467 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 23:16:42.830523   31467 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 23:16:42.830528   31467 command_runner.go:130] > # always happen on a node reboot
	I0116 23:16:42.830533   31467 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 23:16:42.830540   31467 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 23:16:42.830547   31467 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 23:16:42.830557   31467 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 23:16:42.830565   31467 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 23:16:42.830572   31467 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 23:16:42.830580   31467 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 23:16:42.830585   31467 command_runner.go:130] > # internal_wipe = true
	I0116 23:16:42.830593   31467 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 23:16:42.830599   31467 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 23:16:42.830607   31467 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 23:16:42.830615   31467 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 23:16:42.830621   31467 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 23:16:42.830627   31467 command_runner.go:130] > [crio.api]
	I0116 23:16:42.830633   31467 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 23:16:42.830640   31467 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 23:16:42.830645   31467 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 23:16:42.830652   31467 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 23:16:42.830658   31467 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 23:16:42.830666   31467 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 23:16:42.830673   31467 command_runner.go:130] > # stream_port = "0"
	I0116 23:16:42.830679   31467 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 23:16:42.830685   31467 command_runner.go:130] > # stream_enable_tls = false
	I0116 23:16:42.830691   31467 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 23:16:42.830698   31467 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 23:16:42.830704   31467 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 23:16:42.830712   31467 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 23:16:42.830718   31467 command_runner.go:130] > # minutes.
	I0116 23:16:42.830722   31467 command_runner.go:130] > # stream_tls_cert = ""
	I0116 23:16:42.830730   31467 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 23:16:42.830738   31467 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 23:16:42.830743   31467 command_runner.go:130] > # stream_tls_key = ""
	I0116 23:16:42.830749   31467 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 23:16:42.830757   31467 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 23:16:42.830762   31467 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 23:16:42.830768   31467 command_runner.go:130] > # stream_tls_ca = ""
	I0116 23:16:42.830775   31467 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 23:16:42.830782   31467 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 23:16:42.830789   31467 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 23:16:42.830796   31467 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 23:16:42.830818   31467 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 23:16:42.830826   31467 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 23:16:42.830832   31467 command_runner.go:130] > [crio.runtime]
	I0116 23:16:42.830840   31467 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 23:16:42.830846   31467 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 23:16:42.830851   31467 command_runner.go:130] > # "nofile=1024:2048"
	I0116 23:16:42.830856   31467 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 23:16:42.830862   31467 command_runner.go:130] > # default_ulimits = [
	I0116 23:16:42.830866   31467 command_runner.go:130] > # ]
	I0116 23:16:42.830872   31467 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 23:16:42.830879   31467 command_runner.go:130] > # no_pivot = false
	I0116 23:16:42.830885   31467 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 23:16:42.830891   31467 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 23:16:42.830899   31467 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 23:16:42.830904   31467 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 23:16:42.830911   31467 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 23:16:42.830918   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 23:16:42.830924   31467 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 23:16:42.830929   31467 command_runner.go:130] > # Cgroup setting for conmon
	I0116 23:16:42.830938   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 23:16:42.830944   31467 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 23:16:42.830951   31467 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 23:16:42.830958   31467 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 23:16:42.830964   31467 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 23:16:42.830970   31467 command_runner.go:130] > conmon_env = [
	I0116 23:16:42.830976   31467 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 23:16:42.830979   31467 command_runner.go:130] > ]
	I0116 23:16:42.830984   31467 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 23:16:42.830992   31467 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 23:16:42.831000   31467 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 23:16:42.831006   31467 command_runner.go:130] > # default_env = [
	I0116 23:16:42.831010   31467 command_runner.go:130] > # ]
	I0116 23:16:42.831018   31467 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 23:16:42.831024   31467 command_runner.go:130] > # selinux = false
	I0116 23:16:42.831031   31467 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 23:16:42.831040   31467 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 23:16:42.831049   31467 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 23:16:42.831056   31467 command_runner.go:130] > # seccomp_profile = ""
	I0116 23:16:42.831061   31467 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 23:16:42.831069   31467 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 23:16:42.831080   31467 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 23:16:42.831088   31467 command_runner.go:130] > # which might increase security.
	I0116 23:16:42.831099   31467 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 23:16:42.831111   31467 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 23:16:42.831124   31467 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 23:16:42.831137   31467 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 23:16:42.831150   31467 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 23:16:42.831161   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:16:42.831171   31467 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 23:16:42.831186   31467 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 23:16:42.831197   31467 command_runner.go:130] > # the cgroup blockio controller.
	I0116 23:16:42.831206   31467 command_runner.go:130] > # blockio_config_file = ""
	I0116 23:16:42.831219   31467 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 23:16:42.831230   31467 command_runner.go:130] > # irqbalance daemon.
	I0116 23:16:42.831242   31467 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 23:16:42.831255   31467 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 23:16:42.831266   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:16:42.831276   31467 command_runner.go:130] > # rdt_config_file = ""
	I0116 23:16:42.831284   31467 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 23:16:42.831291   31467 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 23:16:42.831297   31467 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 23:16:42.831304   31467 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 23:16:42.831310   31467 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 23:16:42.831318   31467 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 23:16:42.831324   31467 command_runner.go:130] > # will be added.
	I0116 23:16:42.831329   31467 command_runner.go:130] > # default_capabilities = [
	I0116 23:16:42.831335   31467 command_runner.go:130] > # 	"CHOWN",
	I0116 23:16:42.831339   31467 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 23:16:42.831345   31467 command_runner.go:130] > # 	"FSETID",
	I0116 23:16:42.831349   31467 command_runner.go:130] > # 	"FOWNER",
	I0116 23:16:42.831355   31467 command_runner.go:130] > # 	"SETGID",
	I0116 23:16:42.831361   31467 command_runner.go:130] > # 	"SETUID",
	I0116 23:16:42.831366   31467 command_runner.go:130] > # 	"SETPCAP",
	I0116 23:16:42.831371   31467 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 23:16:42.831377   31467 command_runner.go:130] > # 	"KILL",
	I0116 23:16:42.831380   31467 command_runner.go:130] > # ]
	I0116 23:16:42.831389   31467 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 23:16:42.831397   31467 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 23:16:42.831404   31467 command_runner.go:130] > # default_sysctls = [
	I0116 23:16:42.831407   31467 command_runner.go:130] > # ]
	I0116 23:16:42.831414   31467 command_runner.go:130] > # List of devices on the host that a
	I0116 23:16:42.831420   31467 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 23:16:42.831426   31467 command_runner.go:130] > # allowed_devices = [
	I0116 23:16:42.831430   31467 command_runner.go:130] > # 	"/dev/fuse",
	I0116 23:16:42.831434   31467 command_runner.go:130] > # ]
	I0116 23:16:42.831440   31467 command_runner.go:130] > # List of additional devices, specified as
	I0116 23:16:42.831449   31467 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 23:16:42.831457   31467 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 23:16:42.831473   31467 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 23:16:42.831480   31467 command_runner.go:130] > # additional_devices = [
	I0116 23:16:42.831484   31467 command_runner.go:130] > # ]
	I0116 23:16:42.831491   31467 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 23:16:42.831497   31467 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 23:16:42.831501   31467 command_runner.go:130] > # 	"/etc/cdi",
	I0116 23:16:42.831507   31467 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 23:16:42.831511   31467 command_runner.go:130] > # ]
	I0116 23:16:42.831519   31467 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 23:16:42.831527   31467 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 23:16:42.831535   31467 command_runner.go:130] > # Defaults to false.
	I0116 23:16:42.831542   31467 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 23:16:42.831548   31467 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 23:16:42.831556   31467 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 23:16:42.831562   31467 command_runner.go:130] > # hooks_dir = [
	I0116 23:16:42.831566   31467 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 23:16:42.831572   31467 command_runner.go:130] > # ]
	I0116 23:16:42.831579   31467 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 23:16:42.831587   31467 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 23:16:42.831593   31467 command_runner.go:130] > # its default mounts from the following two files:
	I0116 23:16:42.831598   31467 command_runner.go:130] > #
	I0116 23:16:42.831605   31467 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 23:16:42.831613   31467 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 23:16:42.831620   31467 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 23:16:42.831627   31467 command_runner.go:130] > #
	I0116 23:16:42.831633   31467 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 23:16:42.831642   31467 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 23:16:42.831650   31467 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 23:16:42.831658   31467 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 23:16:42.831661   31467 command_runner.go:130] > #
	I0116 23:16:42.831668   31467 command_runner.go:130] > # default_mounts_file = ""
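The mounts file format described above is one /SRC:/DST pair per line. A purely illustrative override file (the mount path is hypothetical, not taken from this run) would look like:

	# /etc/containers/mounts.conf -- one bind mount per line, /SRC:/DST
	/usr/share/ca-certificates:/usr/share/ca-certificates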
	I0116 23:16:42.831673   31467 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 23:16:42.831681   31467 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 23:16:42.831687   31467 command_runner.go:130] > pids_limit = 1024
	I0116 23:16:42.831696   31467 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 23:16:42.831708   31467 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 23:16:42.831721   31467 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 23:16:42.831737   31467 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 23:16:42.831746   31467 command_runner.go:130] > # log_size_max = -1
	I0116 23:16:42.831759   31467 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 23:16:42.831768   31467 command_runner.go:130] > # log_to_journald = false
	I0116 23:16:42.831777   31467 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 23:16:42.831789   31467 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 23:16:42.831800   31467 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 23:16:42.831811   31467 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 23:16:42.831820   31467 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 23:16:42.831829   31467 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 23:16:42.831838   31467 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 23:16:42.831848   31467 command_runner.go:130] > # read_only = false
	I0116 23:16:42.831857   31467 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 23:16:42.831870   31467 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 23:16:42.831879   31467 command_runner.go:130] > # live configuration reload.
	I0116 23:16:42.831885   31467 command_runner.go:130] > # log_level = "info"
	I0116 23:16:42.831898   31467 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 23:16:42.831909   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:16:42.831920   31467 command_runner.go:130] > # log_filter = ""
	I0116 23:16:42.831933   31467 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 23:16:42.831946   31467 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 23:16:42.831956   31467 command_runner.go:130] > # separated by comma.
	I0116 23:16:42.831962   31467 command_runner.go:130] > # uid_mappings = ""
	I0116 23:16:42.831975   31467 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 23:16:42.831985   31467 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 23:16:42.831994   31467 command_runner.go:130] > # separated by comma.
	I0116 23:16:42.832004   31467 command_runner.go:130] > # gid_mappings = ""
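Both mapping options use the same range syntax, containerID:hostID:size, with multiple ranges separated by commas. An illustrative (commented-out) value, assuming a sub-ID range starting at 100000 had been reserved for containers on the host:

	# uid_mappings = "0:100000:65536"
	# gid_mappings = "0:100000:65536"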
	I0116 23:16:42.832017   31467 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 23:16:42.832029   31467 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 23:16:42.832042   31467 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 23:16:42.832052   31467 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 23:16:42.832062   31467 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 23:16:42.832075   31467 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 23:16:42.832087   31467 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 23:16:42.832097   31467 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 23:16:42.832110   31467 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 23:16:42.832122   31467 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 23:16:42.832135   31467 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 23:16:42.832144   31467 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 23:16:42.832157   31467 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 23:16:42.832169   31467 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 23:16:42.832188   31467 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 23:16:42.832197   31467 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 23:16:42.832203   31467 command_runner.go:130] > drop_infra_ctr = false
	I0116 23:16:42.832212   31467 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 23:16:42.832220   31467 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 23:16:42.832229   31467 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 23:16:42.832236   31467 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 23:16:42.832242   31467 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 23:16:42.832249   31467 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 23:16:42.832253   31467 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 23:16:42.832262   31467 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 23:16:42.832269   31467 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 23:16:42.832275   31467 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 23:16:42.832285   31467 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 23:16:42.832295   31467 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 23:16:42.832301   31467 command_runner.go:130] > # default_runtime = "runc"
	I0116 23:16:42.832307   31467 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 23:16:42.832316   31467 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 23:16:42.832327   31467 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 23:16:42.832335   31467 command_runner.go:130] > # creation as a file is not desired either.
	I0116 23:16:42.832343   31467 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 23:16:42.832351   31467 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 23:16:42.832355   31467 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 23:16:42.832361   31467 command_runner.go:130] > # ]
	I0116 23:16:42.832367   31467 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 23:16:42.832378   31467 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 23:16:42.832387   31467 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 23:16:42.832395   31467 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 23:16:42.832401   31467 command_runner.go:130] > #
	I0116 23:16:42.832406   31467 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 23:16:42.832413   31467 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 23:16:42.832417   31467 command_runner.go:130] > #  runtime_type = "oci"
	I0116 23:16:42.832424   31467 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 23:16:42.832429   31467 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 23:16:42.832435   31467 command_runner.go:130] > #  allowed_annotations = []
	I0116 23:16:42.832439   31467 command_runner.go:130] > # Where:
	I0116 23:16:42.832447   31467 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 23:16:42.832456   31467 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 23:16:42.832464   31467 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 23:16:42.832472   31467 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 23:16:42.832478   31467 command_runner.go:130] > #   in $PATH.
	I0116 23:16:42.832485   31467 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 23:16:42.832491   31467 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 23:16:42.832497   31467 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 23:16:42.832503   31467 command_runner.go:130] > #   state.
	I0116 23:16:42.832510   31467 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 23:16:42.832518   31467 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 23:16:42.832524   31467 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 23:16:42.832532   31467 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 23:16:42.832545   31467 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 23:16:42.832556   31467 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 23:16:42.832564   31467 command_runner.go:130] > #   The currently recognized values are:
	I0116 23:16:42.832572   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 23:16:42.832581   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 23:16:42.832589   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 23:16:42.832597   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 23:16:42.832607   31467 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 23:16:42.832615   31467 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 23:16:42.832624   31467 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 23:16:42.832632   31467 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 23:16:42.832640   31467 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 23:16:42.832647   31467 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 23:16:42.832651   31467 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 23:16:42.832655   31467 command_runner.go:130] > runtime_type = "oci"
	I0116 23:16:42.832662   31467 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 23:16:42.832666   31467 command_runner.go:130] > runtime_config_path = ""
	I0116 23:16:42.832672   31467 command_runner.go:130] > monitor_path = ""
	I0116 23:16:42.832676   31467 command_runner.go:130] > monitor_cgroup = ""
	I0116 23:16:42.832683   31467 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 23:16:42.832689   31467 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 23:16:42.832695   31467 command_runner.go:130] > # running containers
	I0116 23:16:42.832700   31467 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 23:16:42.832708   31467 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 23:16:42.832739   31467 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 23:16:42.832747   31467 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 23:16:42.832752   31467 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 23:16:42.832759   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 23:16:42.832764   31467 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 23:16:42.832770   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 23:16:42.832776   31467 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 23:16:42.832782   31467 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
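Only the runc handler is defined in this configuration; the crun and Kata entries above are commented placeholders. Following the [crio.runtime.runtimes.*] format documented earlier in this dump, enabling one of them would take a stanza roughly like the following (the binary and root paths are assumptions, not taken from this node):

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"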
	I0116 23:16:42.832788   31467 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 23:16:42.832796   31467 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 23:16:42.832805   31467 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 23:16:42.832814   31467 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 23:16:42.832825   31467 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 23:16:42.832833   31467 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 23:16:42.832842   31467 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 23:16:42.832851   31467 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 23:16:42.832857   31467 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 23:16:42.832866   31467 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 23:16:42.832870   31467 command_runner.go:130] > # Example:
	I0116 23:16:42.832875   31467 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 23:16:42.832882   31467 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 23:16:42.832886   31467 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 23:16:42.832894   31467 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 23:16:42.832898   31467 command_runner.go:130] > # cpuset = 0
	I0116 23:16:42.832904   31467 command_runner.go:130] > # cpushares = "0-1"
	I0116 23:16:42.832907   31467 command_runner.go:130] > # Where:
	I0116 23:16:42.832912   31467 command_runner.go:130] > # The workload name is workload-type.
	I0116 23:16:42.832920   31467 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 23:16:42.832926   31467 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 23:16:42.832932   31467 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 23:16:42.832940   31467 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 23:16:42.832948   31467 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 23:16:42.832952   31467 command_runner.go:130] > # 
	I0116 23:16:42.832961   31467 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 23:16:42.832964   31467 command_runner.go:130] > #
	I0116 23:16:42.832971   31467 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 23:16:42.832977   31467 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 23:16:42.832985   31467 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 23:16:42.832994   31467 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 23:16:42.833001   31467 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 23:16:42.833006   31467 command_runner.go:130] > [crio.image]
	I0116 23:16:42.833012   31467 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 23:16:42.833019   31467 command_runner.go:130] > # default_transport = "docker://"
	I0116 23:16:42.833025   31467 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 23:16:42.833033   31467 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 23:16:42.833039   31467 command_runner.go:130] > # global_auth_file = ""
	I0116 23:16:42.833044   31467 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 23:16:42.833052   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:16:42.833057   31467 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 23:16:42.833066   31467 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 23:16:42.833074   31467 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 23:16:42.833081   31467 command_runner.go:130] > # This option supports live configuration reload.
	I0116 23:16:42.833085   31467 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 23:16:42.833097   31467 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 23:16:42.833109   31467 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 23:16:42.833122   31467 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 23:16:42.833133   31467 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 23:16:42.833143   31467 command_runner.go:130] > # pause_command = "/pause"
	I0116 23:16:42.833156   31467 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 23:16:42.833169   31467 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 23:16:42.833187   31467 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 23:16:42.833200   31467 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 23:16:42.833214   31467 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 23:16:42.833223   31467 command_runner.go:130] > # signature_policy = ""
	I0116 23:16:42.833236   31467 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 23:16:42.833245   31467 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 23:16:42.833252   31467 command_runner.go:130] > # changing them here.
	I0116 23:16:42.833257   31467 command_runner.go:130] > # insecure_registries = [
	I0116 23:16:42.833263   31467 command_runner.go:130] > # ]
	I0116 23:16:42.833270   31467 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 23:16:42.833277   31467 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 23:16:42.833284   31467 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 23:16:42.833290   31467 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 23:16:42.833296   31467 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 23:16:42.833303   31467 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 23:16:42.833309   31467 command_runner.go:130] > # CNI plugins.
	I0116 23:16:42.833313   31467 command_runner.go:130] > [crio.network]
	I0116 23:16:42.833322   31467 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 23:16:42.833329   31467 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 23:16:42.833336   31467 command_runner.go:130] > # cni_default_network = ""
	I0116 23:16:42.833342   31467 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 23:16:42.833347   31467 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 23:16:42.833355   31467 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 23:16:42.833359   31467 command_runner.go:130] > # plugin_dirs = [
	I0116 23:16:42.833366   31467 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 23:16:42.833371   31467 command_runner.go:130] > # ]
	I0116 23:16:42.833383   31467 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 23:16:42.833393   31467 command_runner.go:130] > [crio.metrics]
	I0116 23:16:42.833404   31467 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 23:16:42.833413   31467 command_runner.go:130] > enable_metrics = true
	I0116 23:16:42.833424   31467 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 23:16:42.833435   31467 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 23:16:42.833447   31467 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 23:16:42.833460   31467 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 23:16:42.833484   31467 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 23:16:42.833495   31467 command_runner.go:130] > # metrics_collectors = [
	I0116 23:16:42.833505   31467 command_runner.go:130] > # 	"operations",
	I0116 23:16:42.833513   31467 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 23:16:42.833518   31467 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 23:16:42.833524   31467 command_runner.go:130] > # 	"operations_errors",
	I0116 23:16:42.833528   31467 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 23:16:42.833535   31467 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 23:16:42.833540   31467 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 23:16:42.833547   31467 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 23:16:42.833551   31467 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 23:16:42.833558   31467 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 23:16:42.833562   31467 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 23:16:42.833569   31467 command_runner.go:130] > # 	"containers_oom_total",
	I0116 23:16:42.833574   31467 command_runner.go:130] > # 	"containers_oom",
	I0116 23:16:42.833580   31467 command_runner.go:130] > # 	"processes_defunct",
	I0116 23:16:42.833584   31467 command_runner.go:130] > # 	"operations_total",
	I0116 23:16:42.833592   31467 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 23:16:42.833599   31467 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 23:16:42.833604   31467 command_runner.go:130] > # 	"operations_errors_total",
	I0116 23:16:42.833610   31467 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 23:16:42.833615   31467 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 23:16:42.833622   31467 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 23:16:42.833626   31467 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 23:16:42.833633   31467 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 23:16:42.833638   31467 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 23:16:42.833644   31467 command_runner.go:130] > # ]
	I0116 23:16:42.833650   31467 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 23:16:42.833656   31467 command_runner.go:130] > # metrics_port = 9090
	I0116 23:16:42.833661   31467 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 23:16:42.833668   31467 command_runner.go:130] > # metrics_socket = ""
	I0116 23:16:42.833673   31467 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 23:16:42.833681   31467 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 23:16:42.833690   31467 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 23:16:42.833697   31467 command_runner.go:130] > # certificate on any modification event.
	I0116 23:16:42.833701   31467 command_runner.go:130] > # metrics_cert = ""
	I0116 23:16:42.833708   31467 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 23:16:42.833716   31467 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 23:16:42.833720   31467 command_runner.go:130] > # metrics_key = ""
	I0116 23:16:42.833728   31467 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 23:16:42.833732   31467 command_runner.go:130] > [crio.tracing]
	I0116 23:16:42.833739   31467 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 23:16:42.833746   31467 command_runner.go:130] > # enable_tracing = false
	I0116 23:16:42.833751   31467 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 23:16:42.833758   31467 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 23:16:42.833764   31467 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 23:16:42.833770   31467 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 23:16:42.833776   31467 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 23:16:42.833783   31467 command_runner.go:130] > [crio.stats]
	I0116 23:16:42.833789   31467 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 23:16:42.833796   31467 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 23:16:42.833803   31467 command_runner.go:130] > # stats_collection_period = 0
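That ends the CRI-O configuration dump for this node. To inspect the same configuration by hand on a minikube machine, something along these lines should work (the profile name is taken from this run; the configuration may also be split across /etc/crio/crio.conf.d/ drop-ins):

	minikube ssh -p multinode-328490 -- sudo cat /etc/crio/crio.conf
	minikube ssh -p multinode-328490 -- sudo crictl info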
	I0116 23:16:42.833864   31467 cni.go:84] Creating CNI manager for ""
	I0116 23:16:42.833873   31467 cni.go:136] 3 nodes found, recommending kindnet
	I0116 23:16:42.833882   31467 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:16:42.833899   31467 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.157 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-328490 NodeName:multinode-328490-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:16:42.834000   31467 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-328490-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
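The YAML above is the kubeadm configuration minikube renders for the joining node (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents). The cluster's live copy of the ClusterConfiguration can be compared against it with the command kubeadm itself prints later in this log (the kubectl context name is assumed to match the profile):

	kubectl --context multinode-328490 -n kube-system get cm kubeadm-config -o yaml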
	
	I0116 23:16:42.834046   31467 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-328490-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-328490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
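The empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet unit before the minikube-specific command line is set. The merged unit and the kubelet config it points at can be checked on the node with (a sketch; the config path matches the one written during kubelet-start below):

	minikube ssh -p multinode-328490 -- systemctl cat kubelet
	minikube ssh -p multinode-328490 -- sudo cat /var/lib/kubelet/config.yaml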
	I0116 23:16:42.834092   31467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:16:42.842078   31467 command_runner.go:130] > kubeadm
	I0116 23:16:42.842101   31467 command_runner.go:130] > kubectl
	I0116 23:16:42.842111   31467 command_runner.go:130] > kubelet
	I0116 23:16:42.842180   31467 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:16:42.842241   31467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 23:16:42.849812   31467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0116 23:16:42.865777   31467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:16:42.879795   31467 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0116 23:16:42.883045   31467 command_runner.go:130] > 192.168.39.50	control-plane.minikube.internal
	I0116 23:16:42.883199   31467 host.go:66] Checking if "multinode-328490" exists ...
	I0116 23:16:42.883428   31467 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:16:42.883553   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:16:42.883598   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:16:42.898988   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36977
	I0116 23:16:42.899430   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:16:42.899895   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:16:42.899918   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:16:42.900232   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:16:42.900448   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:16:42.900602   31467 start.go:304] JoinCluster: &{Name:multinode-328490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-328490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:16:42.900830   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 23:16:42.900856   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:16:42.903652   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:16:42.904096   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:16:42.904122   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:16:42.904284   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:16:42.904447   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:16:42.904607   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:16:42.904730   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:16:43.079347   31467 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token beherx.h736i6niegzi1uw3 --discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
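This join command is produced by the kubeadm token create --print-join-command --ttl=0 invocation shown in the Run: line above. The discovery-token-ca-cert-hash is the SHA-256 of the cluster CA public key; per the upstream kubeadm documentation it can be recomputed on the control plane roughly like this (a sketch, not part of this test run):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'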
	I0116 23:16:43.082948   31467 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 23:16:43.082994   31467 host.go:66] Checking if "multinode-328490" exists ...
	I0116 23:16:43.083307   31467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:16:43.083346   31467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:16:43.097786   31467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0116 23:16:43.098220   31467 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:16:43.098690   31467 main.go:141] libmachine: Using API Version  1
	I0116 23:16:43.098713   31467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:16:43.098967   31467 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:16:43.099164   31467 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:16:43.099347   31467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-328490-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 23:16:43.099365   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:16:43.101931   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:16:43.102363   31467 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:12:33 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:16:43.102390   31467 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:16:43.102522   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:16:43.102718   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:16:43.102851   31467 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:16:43.102969   31467 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:16:43.294072   31467 command_runner.go:130] > node/multinode-328490-m03 cordoned
	I0116 23:16:46.333181   31467 command_runner.go:130] > pod "busybox-5b5d89c9d6-w4m44" has DeletionTimestamp older than 1 seconds, skipping
	I0116 23:16:46.333205   31467 command_runner.go:130] > node/multinode-328490-m03 drained
	I0116 23:16:46.335337   31467 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 23:16:46.335363   31467 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-ngl9m, kube-system/kube-proxy-tc46j
	I0116 23:16:46.335390   31467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-328490-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.236018542s)
	I0116 23:16:46.335406   31467 node.go:108] successfully drained node "m03"
	I0116 23:16:46.335816   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:16:46.336145   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:16:46.336548   31467 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 23:16:46.336611   31467 round_trippers.go:463] DELETE https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:16:46.336620   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:46.336635   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:46.336645   31467 round_trippers.go:473]     Content-Type: application/json
	I0116 23:16:46.336657   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:46.348582   31467 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0116 23:16:46.348600   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:46.348609   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:46.348618   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:46.348625   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:46.348633   31467 round_trippers.go:580]     Content-Length: 171
	I0116 23:16:46.348641   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:46 GMT
	I0116 23:16:46.348650   31467 round_trippers.go:580]     Audit-Id: 76f4cc17-e1eb-4ba4-bcdf-e26912dd33fd
	I0116 23:16:46.348657   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:46.348730   31467 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-328490-m03","kind":"nodes","uid":"f19a8ad4-4a7f-4648-b320-7d48cffd62df"}}
	I0116 23:16:46.348784   31467 node.go:124] successfully deleted node "m03"
	I0116 23:16:46.348797   31467 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
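Before rejoining, minikube drains the stale m03 node and then deletes its Node object directly through the API. A manual equivalent, using the same kubectl binary and the non-deprecated flags from the drain invocation above, would be roughly:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl \
	  drain multinode-328490-m03 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data --disable-eviction
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl \
	  delete node multinode-328490-m03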
	I0116 23:16:46.348824   31467 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 23:16:46.348852   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token beherx.h736i6niegzi1uw3 --discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-328490-m03"
	I0116 23:16:46.399220   31467 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 23:16:46.550952   31467 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 23:16:46.550989   31467 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 23:16:46.610317   31467 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:16:46.610525   31467 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:16:46.610675   31467 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 23:16:46.739534   31467 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 23:16:47.266914   31467 command_runner.go:130] > This node has joined the cluster:
	I0116 23:16:47.266946   31467 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 23:16:47.266956   31467 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 23:16:47.266966   31467 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 23:16:47.269416   31467 command_runner.go:130] ! W0116 23:16:46.388881    2355 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 23:16:47.269441   31467 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0116 23:16:47.269453   31467 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0116 23:16:47.269467   31467 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0116 23:16:47.269596   31467 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 23:16:47.530092   31467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=multinode-328490 minikube.k8s.io/updated_at=2024_01_16T23_16_47_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:16:47.637033   31467 command_runner.go:130] > node/multinode-328490-m02 labeled
	I0116 23:16:47.653312   31467 command_runner.go:130] > node/multinode-328490-m03 labeled
	I0116 23:16:47.655347   31467 start.go:306] JoinCluster complete in 4.754744128s
	I0116 23:16:47.655372   31467 cni.go:84] Creating CNI manager for ""
	I0116 23:16:47.655379   31467 cni.go:136] 3 nodes found, recommending kindnet
	I0116 23:16:47.655438   31467 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 23:16:47.661830   31467 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 23:16:47.661854   31467 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 23:16:47.661864   31467 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 23:16:47.661875   31467 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 23:16:47.661884   31467 command_runner.go:130] > Access: 2024-01-16 23:12:33.865314209 +0000
	I0116 23:16:47.661894   31467 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 23:16:47.661902   31467 command_runner.go:130] > Change: 2024-01-16 23:12:32.165314209 +0000
	I0116 23:16:47.661911   31467 command_runner.go:130] >  Birth: -
	I0116 23:16:47.662048   31467 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 23:16:47.662065   31467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 23:16:47.681780   31467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 23:16:48.004055   31467 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 23:16:48.007836   31467 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 23:16:48.010295   31467 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 23:16:48.023125   31467 command_runner.go:130] > daemonset.apps/kindnet configured
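The CNI apply step above can be reproduced by hand; a minimal sketch (paths and binary version copied from the ssh_runner line, error handling simplified) of shelling out to kubectl the same way:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the ssh_runner line above, shown locally for illustration.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}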
	I0116 23:16:48.025882   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:16:48.026077   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:16:48.026353   31467 round_trippers.go:463] GET https://192.168.39.50:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 23:16:48.026363   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.026370   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.026376   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.028456   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:48.028474   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.028484   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.028493   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.028501   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.028506   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.028512   31467 round_trippers.go:580]     Content-Length: 291
	I0116 23:16:48.028520   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.028529   31467 round_trippers.go:580]     Audit-Id: e26ed1c6-a8a8-4281-93a6-4b426e27e867
	I0116 23:16:48.028551   31467 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9e31c201-6ba7-47ab-b7c2-74a96553d8c6","resourceVersion":"898","creationTimestamp":"2024-01-16T23:01:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 23:16:48.028638   31467 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-328490" context rescaled to 1 replicas
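The "rescaled to 1 replicas" line goes through the Deployment scale subresource (the GET .../deployments/coredns/scale above). A minimal client-go sketch of reading and updating that subresource; the function name is an assumption and clientset construction is as in the earlier sketch:

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets the coredns Deployment to the desired replica count via
// the autoscaling/v1 scale subresource.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired size
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}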
	I0116 23:16:48.028670   31467 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.157 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 23:16:48.031089   31467 out.go:177] * Verifying Kubernetes components...
	I0116 23:16:48.032197   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:16:48.045106   31467 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:16:48.045317   31467 kapi.go:59] client config for multinode-328490: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.crt", KeyFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/profiles/multinode-328490/client.key", CAFile:"/home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c27cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 23:16:48.045522   31467 node_ready.go:35] waiting up to 6m0s for node "multinode-328490-m03" to be "Ready" ...
	I0116 23:16:48.045579   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:16:48.045587   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.045594   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.045599   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.047770   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:48.047786   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.047792   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.047797   31467 round_trippers.go:580]     Audit-Id: d41543f9-80a5-406e-bacc-ed7b87c34721
	I0116 23:16:48.047802   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.047807   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.047812   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.047817   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.047966   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m03","uid":"51cf95aa-bfea-4f01-8b01-03fadc8341d1","resourceVersion":"1228","creationTimestamp":"2024-01-16T23:16:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_16_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:16:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4223 chars]
	I0116 23:16:48.048261   31467 node_ready.go:49] node "multinode-328490-m03" has status "Ready":"True"
	I0116 23:16:48.048277   31467 node_ready.go:38] duration metric: took 2.741141ms waiting for node "multinode-328490-m03" to be "Ready" ...
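The node_ready wait above is a simple poll of the Node's Ready condition. A minimal sketch of the same check; the helper name and polling interval are assumptions:

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named Node until its Ready condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}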
	I0116 23:16:48.048285   31467 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:16:48.048337   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods
	I0116 23:16:48.048344   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.048351   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.048358   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.052685   31467 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 23:16:48.052705   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.052715   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.052724   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.052733   31467 round_trippers.go:580]     Audit-Id: 505381f4-51b5-4e25-ab86-cbdd33713ff3
	I0116 23:16:48.052741   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.052753   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.052764   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.054029   31467 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1234"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"878","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82047 chars]
	I0116 23:16:48.056349   31467 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.056419   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7lcpl
	I0116 23:16:48.056428   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.056435   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.056441   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.058375   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:16:48.058389   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.058395   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.058400   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.058406   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.058414   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.058421   31467 round_trippers.go:580]     Audit-Id: b9bd02e7-1458-41be-8795-cfdc83007135
	I0116 23:16:48.058430   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.058903   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7lcpl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2c5cd6ef-7b39-48aa-b234-13dda7343591","resourceVersion":"878","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"25daaf03-8792-4ae8-bcc0-c64c5c84607c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25daaf03-8792-4ae8-bcc0-c64c5c84607c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 23:16:48.059334   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:48.059348   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.059355   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.059361   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.061166   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:16:48.061182   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.061191   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.061198   31467 round_trippers.go:580]     Audit-Id: 6fb6050b-34bf-4098-bac3-1ece7de0b8fd
	I0116 23:16:48.061207   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.061215   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.061223   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.061229   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.061643   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:16:48.061908   31467 pod_ready.go:92] pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:48.061922   31467 pod_ready.go:81] duration metric: took 5.554629ms waiting for pod "coredns-5dd5756b68-7lcpl" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.061929   31467 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.061968   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-328490
	I0116 23:16:48.061975   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.061982   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.061988   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.064156   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:48.064169   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.064174   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.064179   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.064184   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.064189   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.064195   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.064203   31467 round_trippers.go:580]     Audit-Id: d1328e55-79d8-4c11-b2f4-3a86096ad5e4
	I0116 23:16:48.064656   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-328490","namespace":"kube-system","uid":"92c91283-c595-4eb5-af56-913835c6c778","resourceVersion":"887","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.50:2379","kubernetes.io/config.hash":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.mirror":"135b9ab02260669aa70754e50c2f9d65","kubernetes.io/config.seen":"2024-01-16T23:01:56.235896391Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 23:16:48.065018   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:48.065040   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.065050   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.065061   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.066709   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:16:48.066728   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.066738   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.066746   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.066754   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.066762   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.066770   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.066782   31467 round_trippers.go:580]     Audit-Id: 7aa2b6cb-61e2-45c3-9862-f9f9d354a061
	I0116 23:16:48.066903   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:16:48.067149   31467 pod_ready.go:92] pod "etcd-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:48.067160   31467 pod_ready.go:81] duration metric: took 5.226541ms waiting for pod "etcd-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.067174   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.067213   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-328490
	I0116 23:16:48.067220   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.067226   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.067232   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.068916   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:16:48.068927   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.068932   31467 round_trippers.go:580]     Audit-Id: dea7fcae-b0e8-40da-bcc4-e050153ca20a
	I0116 23:16:48.068945   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.068951   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.068958   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.068963   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.068968   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.069083   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-328490","namespace":"kube-system","uid":"4deddb28-05c8-440a-8c76-f45eaa7c42d9","resourceVersion":"900","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.50:8443","kubernetes.io/config.hash":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.mirror":"8fca46a478051a968c54a441a292fd23","kubernetes.io/config.seen":"2024-01-16T23:01:56.235897532Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 23:16:48.069387   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:48.069397   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.069403   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.069409   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.070928   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:16:48.070941   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.070947   31467 round_trippers.go:580]     Audit-Id: 451d8795-c165-45aa-b47a-e014e5677d63
	I0116 23:16:48.070952   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.070957   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.070970   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.070981   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.070993   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.071135   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:16:48.071401   31467 pod_ready.go:92] pod "kube-apiserver-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:48.071414   31467 pod_ready.go:81] duration metric: took 4.235048ms waiting for pod "kube-apiserver-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.071421   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.071472   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-328490
	I0116 23:16:48.071481   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.071487   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.071493   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.073124   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:16:48.073137   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.073146   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.073155   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.073167   31467 round_trippers.go:580]     Audit-Id: eb30712f-ecc6-434b-8528-d66c9533d801
	I0116 23:16:48.073182   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.073191   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.073203   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.073315   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-328490","namespace":"kube-system","uid":"46b93b7c-b6f2-4ef9-9cb9-395a154034b0","resourceVersion":"901","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.mirror":"02da3be7eefbbafb24bd659d19d0a46d","kubernetes.io/config.seen":"2024-01-16T23:01:56.235898432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 23:16:48.073659   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:48.073671   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.073677   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.073686   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.075274   31467 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 23:16:48.075287   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.075296   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.075305   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.075315   31467 round_trippers.go:580]     Audit-Id: 6389b258-d8d7-452a-b76c-9bc6c0214005
	I0116 23:16:48.075327   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.075339   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.075352   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.076088   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:16:48.076332   31467 pod_ready.go:92] pod "kube-controller-manager-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:48.076345   31467 pod_ready.go:81] duration metric: took 4.917193ms waiting for pod "kube-controller-manager-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.076358   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.245743   31467 request.go:629] Waited for 169.330292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vmdk
	I0116 23:16:48.245821   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vmdk
	I0116 23:16:48.245826   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.245839   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.245849   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.248353   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:48.248377   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.248387   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.248393   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.248402   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.248409   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.248417   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.248432   31467 round_trippers.go:580]     Audit-Id: 42602e9b-915b-44aa-9000-d33d20cd4fb9
	I0116 23:16:48.248549   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vmdk","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba882fac-57b9-4e3a-afc5-09f016f542bf","resourceVersion":"860","creationTimestamp":"2024-01-16T23:02:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:02:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
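The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter rather than the API server. Raising QPS/Burst on the rest.Config removes these pauses; a minimal sketch with illustrative values (client-go's defaults are 5 QPS / 10 burst):

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset whose client-side rate limiter is loosened.
func newFastClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// QPS and Burst control client-go's client-side throttling; the default
	// values are what trigger the waits logged above.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}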
	I0116 23:16:48.446397   31467 request.go:629] Waited for 197.357352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:48.446458   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:48.446463   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.446470   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.446487   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.450426   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:16:48.450448   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.450458   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.450466   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.450475   31467 round_trippers.go:580]     Audit-Id: b14ceecb-853f-432c-9de2-72b4fc524a7d
	I0116 23:16:48.450488   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.450500   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.450512   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.450894   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:16:48.451198   31467 pod_ready.go:92] pod "kube-proxy-6vmdk" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:48.451212   31467 pod_ready.go:81] duration metric: took 374.846549ms waiting for pod "kube-proxy-6vmdk" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.451221   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.646455   31467 request.go:629] Waited for 195.154783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:16:48.646522   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqt7h
	I0116 23:16:48.646530   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.646542   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.646552   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.648965   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:48.648994   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.649004   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.649011   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.649019   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.649027   31467 round_trippers.go:580]     Audit-Id: 3744bccb-a907-4d90-9e5a-38ad84c43ae3
	I0116 23:16:48.649033   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.649041   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.649175   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqt7h","generateName":"kube-proxy-","namespace":"kube-system","uid":"8903f17c-7460-4896-826d-76d99335348d","resourceVersion":"1062","creationTimestamp":"2024-01-16T23:03:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:03:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0116 23:16:48.845843   31467 request.go:629] Waited for 196.240193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:16:48.845931   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m02
	I0116 23:16:48.845938   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:48.845949   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:48.845966   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:48.848957   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:48.848981   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:48.848988   31467 round_trippers.go:580]     Audit-Id: e6f38458-f084-41bf-b926-9d5059cd4f0a
	I0116 23:16:48.848994   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:48.848999   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:48.849004   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:48.849009   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:48.849014   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:48 GMT
	I0116 23:16:48.849703   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m02","uid":"5474a74a-845b-4b16-acc7-38a34a48e2ab","resourceVersion":"1227","creationTimestamp":"2024-01-16T23:15:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_16_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:15:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0116 23:16:48.849975   31467 pod_ready.go:92] pod "kube-proxy-bqt7h" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:48.849992   31467 pod_ready.go:81] duration metric: took 398.764748ms waiting for pod "kube-proxy-bqt7h" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:48.850001   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:49.045903   31467 request.go:629] Waited for 195.815593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:16:49.045970   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tc46j
	I0116 23:16:49.045975   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:49.045983   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:49.045989   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:49.049590   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:16:49.049609   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:49.049616   31467 round_trippers.go:580]     Audit-Id: dcf0e51f-5e45-4e7d-b75f-fb67f49d4a7e
	I0116 23:16:49.049621   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:49.049627   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:49.049631   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:49.049637   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:49.049642   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:49 GMT
	I0116 23:16:49.049870   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tc46j","generateName":"kube-proxy-","namespace":"kube-system","uid":"57831696-d514-4547-9f95-59ea41569c65","resourceVersion":"1242","creationTimestamp":"2024-01-16T23:04:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0d525f2f-977b-421e-a354-7aba8cd54a33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:04:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d525f2f-977b-421e-a354-7aba8cd54a33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0116 23:16:49.245660   31467 request.go:629] Waited for 195.405708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:16:49.245732   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490-m03
	I0116 23:16:49.245737   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:49.245745   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:49.245750   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:49.248156   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:49.248177   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:49.248184   31467 round_trippers.go:580]     Audit-Id: 139ee004-95d0-49b2-887b-8f621119c872
	I0116 23:16:49.248189   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:49.248194   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:49.248199   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:49.248204   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:49.248209   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:49 GMT
	I0116 23:16:49.248327   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490-m03","uid":"51cf95aa-bfea-4f01-8b01-03fadc8341d1","resourceVersion":"1228","creationTimestamp":"2024-01-16T23:16:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T23_16_47_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:16:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4223 chars]
	I0116 23:16:49.248621   31467 pod_ready.go:92] pod "kube-proxy-tc46j" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:49.248635   31467 pod_ready.go:81] duration metric: took 398.625587ms waiting for pod "kube-proxy-tc46j" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:49.248644   31467 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:49.446213   31467 request.go:629] Waited for 197.514921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:16:49.446267   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-328490
	I0116 23:16:49.446272   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:49.446279   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:49.446285   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:49.450945   31467 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 23:16:49.450970   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:49.450978   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:49.450983   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:49.450988   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:49.450993   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:49 GMT
	I0116 23:16:49.450999   31467 round_trippers.go:580]     Audit-Id: 4452cf9b-b0ae-4f58-8c2c-cf77ea447af5
	I0116 23:16:49.451004   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:49.451092   31467 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-328490","namespace":"kube-system","uid":"0f132072-d49d-46ed-a25f-526a38a74885","resourceVersion":"893","creationTimestamp":"2024-01-16T23:01:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.mirror":"f2d187ff6e878e54bc7813dae6e0b674","kubernetes.io/config.seen":"2024-01-16T23:01:56.235892116Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T23:01:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 23:16:49.646692   31467 request.go:629] Waited for 195.261855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:49.646751   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes/multinode-328490
	I0116 23:16:49.646756   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:49.646763   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:49.646770   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:49.649483   31467 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 23:16:49.649503   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:49.649512   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:49 GMT
	I0116 23:16:49.649521   31467 round_trippers.go:580]     Audit-Id: e1fe412c-08aa-4ac4-843a-f1703407913b
	I0116 23:16:49.649529   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:49.649537   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:49.649547   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:49.649556   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:49.649720   31467 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T23:01:52Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 23:16:49.650022   31467 pod_ready.go:92] pod "kube-scheduler-multinode-328490" in "kube-system" namespace has status "Ready":"True"
	I0116 23:16:49.650039   31467 pod_ready.go:81] duration metric: took 401.388407ms waiting for pod "kube-scheduler-multinode-328490" in "kube-system" namespace to be "Ready" ...
	I0116 23:16:49.650052   31467 pod_ready.go:38] duration metric: took 1.60175468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:16:49.650070   31467 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:16:49.650128   31467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:16:49.662966   31467 system_svc.go:56] duration metric: took 12.889231ms WaitForService to wait for kubelet.
	I0116 23:16:49.662993   31467 kubeadm.go:581] duration metric: took 1.634295654s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:16:49.663016   31467 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:16:49.846413   31467 request.go:629] Waited for 183.329979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.50:8443/api/v1/nodes
	I0116 23:16:49.846480   31467 round_trippers.go:463] GET https://192.168.39.50:8443/api/v1/nodes
	I0116 23:16:49.846486   31467 round_trippers.go:469] Request Headers:
	I0116 23:16:49.846493   31467 round_trippers.go:473]     Accept: application/json, */*
	I0116 23:16:49.846500   31467 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 23:16:49.849741   31467 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 23:16:49.849769   31467 round_trippers.go:577] Response Headers:
	I0116 23:16:49.849780   31467 round_trippers.go:580]     Audit-Id: f2ae8183-3280-4e0e-b830-a94e0d69a5bf
	I0116 23:16:49.849789   31467 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 23:16:49.849798   31467 round_trippers.go:580]     Content-Type: application/json
	I0116 23:16:49.849807   31467 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c9ad0f4d-281d-4113-8e2a-bb85265e1c58
	I0116 23:16:49.849816   31467 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1c9b365d-f04f-470c-ad65-d4951fa01ef0
	I0116 23:16:49.849826   31467 round_trippers.go:580]     Date: Tue, 16 Jan 2024 23:16:49 GMT
	I0116 23:16:49.850805   31467 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1247"},"items":[{"metadata":{"name":"multinode-328490","uid":"57b77f0c-c4fb-4878-a91e-5306c80752c8","resourceVersion":"909","creationTimestamp":"2024-01-16T23:01:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-328490","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d44f2747221f24f9b150997f249dc925fca3b3e2","minikube.k8s.io/name":"multinode-328490","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T23_01_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16467 chars]
	I0116 23:16:49.851409   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:16:49.851430   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:16:49.851440   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:16:49.851444   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:16:49.851448   31467 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:16:49.851451   31467 node_conditions.go:123] node cpu capacity is 2
	I0116 23:16:49.851455   31467 node_conditions.go:105] duration metric: took 188.435226ms to run NodePressure ...
	I0116 23:16:49.851465   31467 start.go:228] waiting for startup goroutines ...
	I0116 23:16:49.851482   31467 start.go:242] writing updated cluster config ...
	I0116 23:16:49.851742   31467 ssh_runner.go:195] Run: rm -f paused
	I0116 23:16:49.898155   31467 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 23:16:49.901209   31467 out.go:177] * Done! kubectl is now configured to use "multinode-328490" cluster and "default" namespace by default
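
The node_conditions entries above record the per-node cpu and ephemeral-storage capacity that minikube reads back once the apiserver responds; the three capacity/cpu pairs correspond to the three nodes returned in the NodeList response. As an illustration only (a minimal sketch, not minikube's own implementation), the same fields can be read with client-go, assuming the default kubeconfig that the start command writes:

// Minimal sketch of the NodePressure-style capacity check logged above:
// list every node and print its cpu / ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the kubeconfig written by "minikube start" is at the default path (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu capacity %s, ephemeral-storage capacity %s\n",
			n.Name, cpu.String(), storage.String())
	}
}
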
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:12:32 UTC, ends at Tue 2024-01-16 23:16:51 UTC. --
	Jan 16 23:16:50 multinode-328490 crio[707]: time="2024-01-16 23:16:50.966661075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705447010966648514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c9d67c94-356a-408f-a012-b562792f588c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 23:16:50 multinode-328490 crio[707]: time="2024-01-16 23:16:50.967224871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85fbbef9-5bbc-4876-bb2b-f1a28a194c5e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:50 multinode-328490 crio[707]: time="2024-01-16 23:16:50.967291567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85fbbef9-5bbc-4876-bb2b-f1a28a194c5e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:50 multinode-328490 crio[707]: time="2024-01-16 23:16:50.967551519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2efca75a71110504debc1ab5e245062b7574a74cd7977685dfbc1d07b769d6e,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705446816814291207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e50c9c399d5f716d4c7012e19a2c4129a97c95cb9fdd34064b34194b9d41ec,PodSandboxId:d5b70bd2069daa51135152853434b110f7dfd8c9f359b79a604a3ab81232517f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705446796437497491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-b7wdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22c6762c-c3f1-47f6-a02a-1936e80ca0c8,},Annotations:map[string]string{io.kubernetes.container.hash: bc49cc95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da350ec13f52c30f23e5d234dd46d50b2ccc7689e9e3f1a59d22bd176d369e54,PodSandboxId:7fe17c302b5ab31cf81ccfa08fd03fc0f75880ae963e5716be97a49af0090430,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705446794154768418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7lcpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5cd6ef-7b39-48aa-b234-13dda7343591,},Annotations:map[string]string{io.kubernetes.container.hash: 3e716390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85380536206a86b3db78a89be987215b2b4e15e0bcf1b4267ecf30f58bee271c,PodSandboxId:e124734c5a51c4d79bc8cb481e7d089fe3b576f6f8f59b8b974c49293de63d25,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705446788998260351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7s7p2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d5e4026d-cf51-44ae-9fd4-2467d26183a3,},Annotations:map[string]string{io.kubernetes.container.hash: aaf3de22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1c91609ff2fcd0a566fee089995991c383f41047d307e37be3fc94a46bdf36,PodSandboxId:3a1b030b2c9f209603c1d1b88de2609a22494b7aa62d08d1df971aaa2c98d3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705446786649489043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vmdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba882fac-57b9-4e3a-afc5-09f016
f542bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8213eccb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bcdaf9d3668e8670b86a0ab914349e224025329d5bf46bee1fd0ed6818eafac,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705446786365125534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e
3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ef0a78423c7f6ce56c4ad01d81d6e0853d6aa4a4f16fd9093ea462583d0ca3,PodSandboxId:7cc9bc5c9a2d21e883d59a4d90617bf7620728febb50eea83366df69a34f84db,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705446780942563126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9ab02260669aa70754e50c2f9d65,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8c32798f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efbe8177df88f48093367682d12f088b02b2e287e5cd45efb973dece05dfacc4,PodSandboxId:3a340264187cad3ec9413329ccf96c0ca1d29e3dd3eec27d2f015bbc3ca4e1e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705446781008518802,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d187ff6e878e54bc7813dae6e0b674,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc4375842d95b51740c7e75f09f701ec9be5c72ca919b8ee4244da5c57e5136,PodSandboxId:18d8a7a4ab292c4cb25f9a3072e5c42ef34f9fad832f73146be3f57141c4867a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705446780685511978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02da3be7eefbbafb24bd659d19d0a46d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4f48048fff681d95a24be06af48ea34cd80a7950a5fcdf6838dfc03f0ef9bb,PodSandboxId:081273c16bd0309bd1528a48b6ce4c36a750adbbe0f37bdf0e1237481e17069d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705446780643997444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fca46a478051a968c54a441a292fd23,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85fbbef9-5bbc-4876-bb2b-f1a28a194c5e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.004854860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bd1c1b59-65fd-46d8-a24b-a7b029d05cd4 name=/runtime.v1.RuntimeService/Version
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.004930147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bd1c1b59-65fd-46d8-a24b-a7b029d05cd4 name=/runtime.v1.RuntimeService/Version
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.006000192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5d72067b-f450-491b-91df-77310d95780f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.006473720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705447011006458517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5d72067b-f450-491b-91df-77310d95780f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.007035040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ec6b9494-9363-46fc-970a-43d27c18a788 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.007086117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ec6b9494-9363-46fc-970a-43d27c18a788 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.007278906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2efca75a71110504debc1ab5e245062b7574a74cd7977685dfbc1d07b769d6e,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705446816814291207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e50c9c399d5f716d4c7012e19a2c4129a97c95cb9fdd34064b34194b9d41ec,PodSandboxId:d5b70bd2069daa51135152853434b110f7dfd8c9f359b79a604a3ab81232517f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705446796437497491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-b7wdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22c6762c-c3f1-47f6-a02a-1936e80ca0c8,},Annotations:map[string]string{io.kubernetes.container.hash: bc49cc95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da350ec13f52c30f23e5d234dd46d50b2ccc7689e9e3f1a59d22bd176d369e54,PodSandboxId:7fe17c302b5ab31cf81ccfa08fd03fc0f75880ae963e5716be97a49af0090430,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705446794154768418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7lcpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5cd6ef-7b39-48aa-b234-13dda7343591,},Annotations:map[string]string{io.kubernetes.container.hash: 3e716390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85380536206a86b3db78a89be987215b2b4e15e0bcf1b4267ecf30f58bee271c,PodSandboxId:e124734c5a51c4d79bc8cb481e7d089fe3b576f6f8f59b8b974c49293de63d25,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705446788998260351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7s7p2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d5e4026d-cf51-44ae-9fd4-2467d26183a3,},Annotations:map[string]string{io.kubernetes.container.hash: aaf3de22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1c91609ff2fcd0a566fee089995991c383f41047d307e37be3fc94a46bdf36,PodSandboxId:3a1b030b2c9f209603c1d1b88de2609a22494b7aa62d08d1df971aaa2c98d3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705446786649489043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vmdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba882fac-57b9-4e3a-afc5-09f016
f542bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8213eccb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bcdaf9d3668e8670b86a0ab914349e224025329d5bf46bee1fd0ed6818eafac,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705446786365125534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e
3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ef0a78423c7f6ce56c4ad01d81d6e0853d6aa4a4f16fd9093ea462583d0ca3,PodSandboxId:7cc9bc5c9a2d21e883d59a4d90617bf7620728febb50eea83366df69a34f84db,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705446780942563126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9ab02260669aa70754e50c2f9d65,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8c32798f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efbe8177df88f48093367682d12f088b02b2e287e5cd45efb973dece05dfacc4,PodSandboxId:3a340264187cad3ec9413329ccf96c0ca1d29e3dd3eec27d2f015bbc3ca4e1e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705446781008518802,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d187ff6e878e54bc7813dae6e0b674,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc4375842d95b51740c7e75f09f701ec9be5c72ca919b8ee4244da5c57e5136,PodSandboxId:18d8a7a4ab292c4cb25f9a3072e5c42ef34f9fad832f73146be3f57141c4867a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705446780685511978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02da3be7eefbbafb24bd659d19d0a46d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4f48048fff681d95a24be06af48ea34cd80a7950a5fcdf6838dfc03f0ef9bb,PodSandboxId:081273c16bd0309bd1528a48b6ce4c36a750adbbe0f37bdf0e1237481e17069d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705446780643997444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fca46a478051a968c54a441a292fd23,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ec6b9494-9363-46fc-970a-43d27c18a788 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.044439961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fb09baeb-b255-4c0a-9bd8-5ce4e0626edc name=/runtime.v1.RuntimeService/Version
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.044525815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fb09baeb-b255-4c0a-9bd8-5ce4e0626edc name=/runtime.v1.RuntimeService/Version
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.046407121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=89553d76-eec0-4451-a380-b15bf6772afc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.046799925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705447011046786177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=89553d76-eec0-4451-a380-b15bf6772afc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.047578123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=41398e68-2608-4dc7-a8c2-fc17281bdea0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.047626616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=41398e68-2608-4dc7-a8c2-fc17281bdea0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.047839719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2efca75a71110504debc1ab5e245062b7574a74cd7977685dfbc1d07b769d6e,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705446816814291207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e50c9c399d5f716d4c7012e19a2c4129a97c95cb9fdd34064b34194b9d41ec,PodSandboxId:d5b70bd2069daa51135152853434b110f7dfd8c9f359b79a604a3ab81232517f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705446796437497491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-b7wdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22c6762c-c3f1-47f6-a02a-1936e80ca0c8,},Annotations:map[string]string{io.kubernetes.container.hash: bc49cc95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da350ec13f52c30f23e5d234dd46d50b2ccc7689e9e3f1a59d22bd176d369e54,PodSandboxId:7fe17c302b5ab31cf81ccfa08fd03fc0f75880ae963e5716be97a49af0090430,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705446794154768418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7lcpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5cd6ef-7b39-48aa-b234-13dda7343591,},Annotations:map[string]string{io.kubernetes.container.hash: 3e716390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85380536206a86b3db78a89be987215b2b4e15e0bcf1b4267ecf30f58bee271c,PodSandboxId:e124734c5a51c4d79bc8cb481e7d089fe3b576f6f8f59b8b974c49293de63d25,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705446788998260351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7s7p2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d5e4026d-cf51-44ae-9fd4-2467d26183a3,},Annotations:map[string]string{io.kubernetes.container.hash: aaf3de22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1c91609ff2fcd0a566fee089995991c383f41047d307e37be3fc94a46bdf36,PodSandboxId:3a1b030b2c9f209603c1d1b88de2609a22494b7aa62d08d1df971aaa2c98d3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705446786649489043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vmdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba882fac-57b9-4e3a-afc5-09f016
f542bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8213eccb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bcdaf9d3668e8670b86a0ab914349e224025329d5bf46bee1fd0ed6818eafac,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705446786365125534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e
3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ef0a78423c7f6ce56c4ad01d81d6e0853d6aa4a4f16fd9093ea462583d0ca3,PodSandboxId:7cc9bc5c9a2d21e883d59a4d90617bf7620728febb50eea83366df69a34f84db,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705446780942563126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9ab02260669aa70754e50c2f9d65,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8c32798f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efbe8177df88f48093367682d12f088b02b2e287e5cd45efb973dece05dfacc4,PodSandboxId:3a340264187cad3ec9413329ccf96c0ca1d29e3dd3eec27d2f015bbc3ca4e1e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705446781008518802,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d187ff6e878e54bc7813dae6e0b674,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc4375842d95b51740c7e75f09f701ec9be5c72ca919b8ee4244da5c57e5136,PodSandboxId:18d8a7a4ab292c4cb25f9a3072e5c42ef34f9fad832f73146be3f57141c4867a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705446780685511978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02da3be7eefbbafb24bd659d19d0a46d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4f48048fff681d95a24be06af48ea34cd80a7950a5fcdf6838dfc03f0ef9bb,PodSandboxId:081273c16bd0309bd1528a48b6ce4c36a750adbbe0f37bdf0e1237481e17069d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705446780643997444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fca46a478051a968c54a441a292fd23,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=41398e68-2608-4dc7-a8c2-fc17281bdea0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.089535678Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b7780d13-1816-4328-a96d-bebcb5f6c610 name=/runtime.v1.RuntimeService/Version
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.089613703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b7780d13-1816-4328-a96d-bebcb5f6c610 name=/runtime.v1.RuntimeService/Version
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.091234009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5426157c-fd0f-42a6-a0ca-9beb7e072dd2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.091713492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705447011091698148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5426157c-fd0f-42a6-a0ca-9beb7e072dd2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.092567113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=397216b2-7abd-4e33-9d6d-a7f5986b51b8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.092909115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=397216b2-7abd-4e33-9d6d-a7f5986b51b8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 23:16:51 multinode-328490 crio[707]: time="2024-01-16 23:16:51.093231547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2efca75a71110504debc1ab5e245062b7574a74cd7977685dfbc1d07b769d6e,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705446816814291207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e50c9c399d5f716d4c7012e19a2c4129a97c95cb9fdd34064b34194b9d41ec,PodSandboxId:d5b70bd2069daa51135152853434b110f7dfd8c9f359b79a604a3ab81232517f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705446796437497491,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-b7wdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22c6762c-c3f1-47f6-a02a-1936e80ca0c8,},Annotations:map[string]string{io.kubernetes.container.hash: bc49cc95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da350ec13f52c30f23e5d234dd46d50b2ccc7689e9e3f1a59d22bd176d369e54,PodSandboxId:7fe17c302b5ab31cf81ccfa08fd03fc0f75880ae963e5716be97a49af0090430,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705446794154768418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7lcpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c5cd6ef-7b39-48aa-b234-13dda7343591,},Annotations:map[string]string{io.kubernetes.container.hash: 3e716390,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85380536206a86b3db78a89be987215b2b4e15e0bcf1b4267ecf30f58bee271c,PodSandboxId:e124734c5a51c4d79bc8cb481e7d089fe3b576f6f8f59b8b974c49293de63d25,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705446788998260351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7s7p2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d5e4026d-cf51-44ae-9fd4-2467d26183a3,},Annotations:map[string]string{io.kubernetes.container.hash: aaf3de22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1c91609ff2fcd0a566fee089995991c383f41047d307e37be3fc94a46bdf36,PodSandboxId:3a1b030b2c9f209603c1d1b88de2609a22494b7aa62d08d1df971aaa2c98d3dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705446786649489043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vmdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba882fac-57b9-4e3a-afc5-09f016
f542bf,},Annotations:map[string]string{io.kubernetes.container.hash: 8213eccb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bcdaf9d3668e8670b86a0ab914349e224025329d5bf46bee1fd0ed6818eafac,PodSandboxId:6a0a0c99d26ff494667b18d230c62405c5cf1df80a99dab10cf2b6eaf2ee9270,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705446786365125534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9895967-db72-4455-81be-1a2b274e
3a42,},Annotations:map[string]string{io.kubernetes.container.hash: 1012b8f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ef0a78423c7f6ce56c4ad01d81d6e0853d6aa4a4f16fd9093ea462583d0ca3,PodSandboxId:7cc9bc5c9a2d21e883d59a4d90617bf7620728febb50eea83366df69a34f84db,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705446780942563126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9ab02260669aa70754e50c2f9d65,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 8c32798f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efbe8177df88f48093367682d12f088b02b2e287e5cd45efb973dece05dfacc4,PodSandboxId:3a340264187cad3ec9413329ccf96c0ca1d29e3dd3eec27d2f015bbc3ca4e1e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705446781008518802,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2d187ff6e878e54bc7813dae6e0b674,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc4375842d95b51740c7e75f09f701ec9be5c72ca919b8ee4244da5c57e5136,PodSandboxId:18d8a7a4ab292c4cb25f9a3072e5c42ef34f9fad832f73146be3f57141c4867a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705446780685511978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02da3be7eefbbafb24bd659d19d0a46d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb4f48048fff681d95a24be06af48ea34cd80a7950a5fcdf6838dfc03f0ef9bb,PodSandboxId:081273c16bd0309bd1528a48b6ce4c36a750adbbe0f37bdf0e1237481e17069d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705446780643997444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-328490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fca46a478051a968c54a441a292fd23,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: dfe7758e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=397216b2-7abd-4e33-9d6d-a7f5986b51b8 name=/runtime.v1.RuntimeService/ListContainers
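
The journal above is dominated by the kubelet's periodic Version, ImageFsInfo, and ListContainers polls against CRI-O over its unix socket. As an illustration only (a minimal sketch, assuming the k8s.io/cri-api and gRPC modules and root access to the crio.sock path shown in the node's cri-socket annotation), an equivalent CRI client looks like this; in practice the same listing is what a crictl ps -a style query produces, which is the shape of the container status table below.

// Minimal CRI gRPC client sketch mirroring the Version / ListContainers calls in the journal.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's socket path, taken from the node's cri-socket annotation above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the Version request seen before each poll in the journal.
	ver, err := rt.Version(context.TODO(), &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// With no filter set, CRI-O returns the full container list
	// (the "No filters were applied" debug line above).
	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
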
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2efca75a7111       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   6a0a0c99d26ff       storage-provisioner
	14e50c9c399d5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   d5b70bd2069da       busybox-5b5d89c9d6-b7wdd
	da350ec13f52c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   7fe17c302b5ab       coredns-5dd5756b68-7lcpl
	85380536206a8       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   e124734c5a51c       kindnet-7s7p2
	6a1c91609ff2f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   3a1b030b2c9f2       kube-proxy-6vmdk
	0bcdaf9d3668e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   6a0a0c99d26ff       storage-provisioner
	efbe8177df88f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   3a340264187ca       kube-scheduler-multinode-328490
	b2ef0a78423c7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   7cc9bc5c9a2d2       etcd-multinode-328490
	acc4375842d95       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   18d8a7a4ab292       kube-controller-manager-multinode-328490
	cb4f48048fff6       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   081273c16bd03       kube-apiserver-multinode-328490
	
	
	==> coredns [da350ec13f52c30f23e5d234dd46d50b2ccc7689e9e3f1a59d22bd176d369e54] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56855 - 5267 "HINFO IN 2702248110680655156.804083665873773769. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009941845s
	
	
	==> describe nodes <==
	Name:               multinode-328490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=multinode-328490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_01_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-328490
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 23:16:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 23:13:36 +0000   Tue, 16 Jan 2024 23:01:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 23:13:36 +0000   Tue, 16 Jan 2024 23:01:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 23:13:36 +0000   Tue, 16 Jan 2024 23:01:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 23:13:36 +0000   Tue, 16 Jan 2024 23:13:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    multinode-328490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1b6f6dbdc39496b98ce989fbfcabfbd
	  System UUID:                f1b6f6db-dc39-496b-98ce-989fbfcabfbd
	  Boot ID:                    7012da42-5b6a-47eb-b550-2cd760364301
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-b7wdd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-7lcpl                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-328490                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-7s7p2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-328490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-328490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6vmdk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-328490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-328490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-328490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-328490 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-328490 event: Registered Node multinode-328490 in Controller
	  Normal  NodeReady                14m                    kubelet          Node multinode-328490 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node multinode-328490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node multinode-328490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node multinode-328490 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-328490 event: Registered Node multinode-328490 in Controller
	
	
	Name:               multinode-328490-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328490-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=multinode-328490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T23_16_47_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:15:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-328490-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 23:16:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 23:15:03 +0000   Tue, 16 Jan 2024 23:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 23:15:03 +0000   Tue, 16 Jan 2024 23:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 23:15:03 +0000   Tue, 16 Jan 2024 23:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 23:15:03 +0000   Tue, 16 Jan 2024 23:15:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    multinode-328490-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f3f2c903aac4ea2942bfb7204cdcfbf
	  System UUID:                5f3f2c90-3aac-4ea2-942b-fb7204cdcfbf
	  Boot ID:                    e47f1c23-3fe4-4839-8fab-8f39c19c1407
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-xm58p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-d8kbq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-bqt7h            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 106s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node multinode-328490-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node multinode-328490-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node multinode-328490-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet          Node multinode-328490-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m53s                  kubelet          Node multinode-328490-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m14s (x2 over 3m14s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       111s                   kubelet          Node multinode-328490-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 108s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  108s (x2 over 108s)    kubelet          Node multinode-328490-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x2 over 108s)    kubelet          Node multinode-328490-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x2 over 108s)    kubelet          Node multinode-328490-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  108s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                108s                   kubelet          Node multinode-328490-m02 status is now: NodeReady
	  Normal   RegisteredNode           103s                   node-controller  Node multinode-328490-m02 event: Registered Node multinode-328490-m02 in Controller
	
	
	Name:               multinode-328490-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-328490-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=multinode-328490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T23_16_47_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:16:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-328490-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 23:16:47 +0000   Tue, 16 Jan 2024 23:16:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 23:16:47 +0000   Tue, 16 Jan 2024 23:16:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 23:16:47 +0000   Tue, 16 Jan 2024 23:16:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 23:16:47 +0000   Tue, 16 Jan 2024 23:16:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    multinode-328490-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9be832b062246489e57fe88a3d3e8ea
	  System UUID:                a9be832b-0622-4648-9e57-fe88a3d3e8ea
	  Boot ID:                    4eb406b0-01ed-4b9a-bf50-be117f9e6d89
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-w4m44    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kindnet-ngl9m               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-tc46j            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 2s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-328490-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-328490-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-328490-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-328490-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-328490-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-328490-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-328490-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-328490-m03 status is now: NodeReady
	  Normal   NodeNotReady             73s                 kubelet     Node multinode-328490-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        41s (x2 over 101s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-328490-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-328490-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-328490-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                4s                  kubelet     Node multinode-328490-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan16 23:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062412] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.258506] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.720669] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.124078] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.503895] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.972304] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.101075] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.150363] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.113290] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.205147] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +16.669801] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[Jan16 23:13] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [b2ef0a78423c7f6ce56c4ad01d81d6e0853d6aa4a4f16fd9093ea462583d0ca3] <==
	{"level":"info","ts":"2024-01-16T23:13:02.877571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T23:13:02.877597Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T23:13:02.877812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c switched to configuration voters=(16941950758946187852)"}
	{"level":"info","ts":"2024-01-16T23:13:02.877876Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","added-peer-id":"eb1de673f525aa4c","added-peer-peer-urls":["https://192.168.39.50:2380"]}
	{"level":"info","ts":"2024-01-16T23:13:02.877987Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:13:02.878029Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:13:02.883605Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T23:13:02.883798Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"eb1de673f525aa4c","initial-advertise-peer-urls":["https://192.168.39.50:2380"],"listen-peer-urls":["https://192.168.39.50:2380"],"advertise-client-urls":["https://192.168.39.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T23:13:02.88384Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T23:13:02.883933Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-01-16T23:13:02.883956Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-01-16T23:13:03.962693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-16T23:13:03.962798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-16T23:13:03.962833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgPreVoteResp from eb1de673f525aa4c at term 2"}
	{"level":"info","ts":"2024-01-16T23:13:03.962864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became candidate at term 3"}
	{"level":"info","ts":"2024-01-16T23:13:03.962888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgVoteResp from eb1de673f525aa4c at term 3"}
	{"level":"info","ts":"2024-01-16T23:13:03.962914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became leader at term 3"}
	{"level":"info","ts":"2024-01-16T23:13:03.96294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eb1de673f525aa4c elected leader eb1de673f525aa4c at term 3"}
	{"level":"info","ts":"2024-01-16T23:13:03.967408Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:13:03.967415Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"eb1de673f525aa4c","local-member-attributes":"{Name:multinode-328490 ClientURLs:[https://192.168.39.50:2379]}","request-path":"/0/members/eb1de673f525aa4c/attributes","cluster-id":"c4909210040256fc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T23:13:03.967714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:13:03.968671Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.50:2379"}
	{"level":"info","ts":"2024-01-16T23:13:03.968895Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T23:13:03.969083Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T23:13:03.969116Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:16:51 up 4 min,  0 users,  load average: 0.39, 0.38, 0.17
	Linux multinode-328490 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [85380536206a86b3db78a89be987215b2b4e15e0bcf1b4267ecf30f58bee271c] <==
	I0116 23:16:20.451680       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 23:16:20.451975       1 main.go:227] handling current node
	I0116 23:16:20.452005       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0116 23:16:20.452046       1 main.go:250] Node multinode-328490-m02 has CIDR [10.244.1.0/24] 
	I0116 23:16:20.452175       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0116 23:16:20.452200       1 main.go:250] Node multinode-328490-m03 has CIDR [10.244.3.0/24] 
	I0116 23:16:30.464902       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 23:16:30.465003       1 main.go:227] handling current node
	I0116 23:16:30.465028       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0116 23:16:30.465047       1 main.go:250] Node multinode-328490-m02 has CIDR [10.244.1.0/24] 
	I0116 23:16:30.465162       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0116 23:16:30.465183       1 main.go:250] Node multinode-328490-m03 has CIDR [10.244.3.0/24] 
	I0116 23:16:40.473405       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 23:16:40.473552       1 main.go:227] handling current node
	I0116 23:16:40.473585       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0116 23:16:40.473671       1 main.go:250] Node multinode-328490-m02 has CIDR [10.244.1.0/24] 
	I0116 23:16:40.473966       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0116 23:16:40.474079       1 main.go:250] Node multinode-328490-m03 has CIDR [10.244.3.0/24] 
	I0116 23:16:50.483387       1 main.go:223] Handling node with IPs: map[192.168.39.50:{}]
	I0116 23:16:50.483610       1 main.go:227] handling current node
	I0116 23:16:50.483656       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0116 23:16:50.483686       1 main.go:250] Node multinode-328490-m02 has CIDR [10.244.1.0/24] 
	I0116 23:16:50.483855       1 main.go:223] Handling node with IPs: map[192.168.39.157:{}]
	I0116 23:16:50.483889       1 main.go:250] Node multinode-328490-m03 has CIDR [10.244.2.0/24] 
	I0116 23:16:50.483971       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.157 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [cb4f48048fff681d95a24be06af48ea34cd80a7950a5fcdf6838dfc03f0ef9bb] <==
	I0116 23:13:05.306620       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0116 23:13:05.306571       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0116 23:13:05.306563       1 establishing_controller.go:76] Starting EstablishingController
	I0116 23:13:05.434088       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 23:13:05.509141       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0116 23:13:05.509720       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 23:13:05.511647       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 23:13:05.512448       1 shared_informer.go:318] Caches are synced for configmaps
	I0116 23:13:05.512537       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0116 23:13:05.512551       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0116 23:13:05.512606       1 aggregator.go:166] initial CRD sync complete...
	I0116 23:13:05.512640       1 autoregister_controller.go:141] Starting autoregister controller
	I0116 23:13:05.512647       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0116 23:13:05.512653       1 cache.go:39] Caches are synced for autoregister controller
	I0116 23:13:05.513716       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0116 23:13:05.513750       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0116 23:13:05.534706       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0116 23:13:06.317136       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 23:13:08.093251       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 23:13:08.224914       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 23:13:08.238281       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 23:13:08.303976       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 23:13:08.310501       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 23:13:18.222257       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 23:13:18.229062       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [acc4375842d95b51740c7e75f09f701ec9be5c72ca919b8ee4244da5c57e5136] <==
	I0116 23:15:03.442224       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-328490-m03"
	I0116 23:15:03.442403       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-328490-m02\" does not exist"
	I0116 23:15:03.444735       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-dcshd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-dcshd"
	I0116 23:15:03.455782       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-328490-m02" podCIDRs=["10.244.1.0/24"]
	I0116 23:15:03.479415       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-328490-m02"
	I0116 23:15:04.375783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="71.816µs"
	I0116 23:15:08.160241       1 event.go:307] "Event occurred" object="multinode-328490-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-328490-m02 event: Registered Node multinode-328490-m02 in Controller"
	I0116 23:15:17.623972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.979µs"
	I0116 23:15:18.214650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="102.412µs"
	I0116 23:15:18.217040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.288µs"
	I0116 23:15:38.628897       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-328490-m02"
	I0116 23:16:43.329613       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-xm58p"
	I0116 23:16:43.346510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="28.510669ms"
	I0116 23:16:43.359897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.181ms"
	I0116 23:16:43.360025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.36µs"
	I0116 23:16:43.366265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.563µs"
	I0116 23:16:44.480287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="5.988832ms"
	I0116 23:16:44.480437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.448µs"
	I0116 23:16:46.344649       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-328490-m02"
	I0116 23:16:46.946286       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-328490-m03\" does not exist"
	I0116 23:16:46.946591       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-328490-m02"
	I0116 23:16:46.946979       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-w4m44" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-w4m44"
	I0116 23:16:46.976408       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-328490-m03" podCIDRs=["10.244.2.0/24"]
	I0116 23:16:47.301925       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-328490-m03"
	I0116 23:16:47.911126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.716µs"
	
	
	==> kube-proxy [6a1c91609ff2fcd0a566fee089995991c383f41047d307e37be3fc94a46bdf36] <==
	I0116 23:13:06.863391       1 server_others.go:69] "Using iptables proxy"
	I0116 23:13:06.874173       1 node.go:141] Successfully retrieved node IP: 192.168.39.50
	I0116 23:13:06.912668       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 23:13:06.912713       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 23:13:06.915244       1 server_others.go:152] "Using iptables Proxier"
	I0116 23:13:06.915435       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 23:13:06.915876       1 server.go:846] "Version info" version="v1.28.4"
	I0116 23:13:06.915903       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 23:13:06.916724       1 config.go:188] "Starting service config controller"
	I0116 23:13:06.916771       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 23:13:06.916794       1 config.go:97] "Starting endpoint slice config controller"
	I0116 23:13:06.916797       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 23:13:06.918877       1 config.go:315] "Starting node config controller"
	I0116 23:13:06.918925       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 23:13:07.017657       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 23:13:07.017667       1 shared_informer.go:318] Caches are synced for service config
	I0116 23:13:07.019615       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [efbe8177df88f48093367682d12f088b02b2e287e5cd45efb973dece05dfacc4] <==
	W0116 23:13:05.434002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 23:13:05.434032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 23:13:05.434107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 23:13:05.434205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 23:13:05.434431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 23:13:05.434483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 23:13:05.434551       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:13:05.434581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 23:13:05.434694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:13:05.434723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 23:13:05.434798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 23:13:05.434825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 23:13:05.434910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 23:13:05.434939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 23:13:05.435010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:13:05.435037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 23:13:05.435114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:13:05.435142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 23:13:05.435207       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 23:13:05.435235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 23:13:05.435294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 23:13:05.438486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:13:05.438607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:13:05.438540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0116 23:13:06.315431       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:12:32 UTC, ends at Tue 2024-01-16 23:16:51 UTC. --
	Jan 16 23:13:07 multinode-328490 kubelet[913]: E0116 23:13:07.245559     913 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 16 23:13:07 multinode-328490 kubelet[913]: E0116 23:13:07.245597     913 projected.go:198] Error preparing data for projected volume kube-api-access-5l746 for pod default/busybox-5b5d89c9d6-b7wdd: object "default"/"kube-root-ca.crt" not registered
	Jan 16 23:13:07 multinode-328490 kubelet[913]: E0116 23:13:07.245650     913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22c6762c-c3f1-47f6-a02a-1936e80ca0c8-kube-api-access-5l746 podName:22c6762c-c3f1-47f6-a02a-1936e80ca0c8 nodeName:}" failed. No retries permitted until 2024-01-16 23:13:09.245636202 +0000 UTC m=+9.889247803 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5l746" (UniqueName: "kubernetes.io/projected/22c6762c-c3f1-47f6-a02a-1936e80ca0c8-kube-api-access-5l746") pod "busybox-5b5d89c9d6-b7wdd" (UID: "22c6762c-c3f1-47f6-a02a-1936e80ca0c8") : object "default"/"kube-root-ca.crt" not registered
	Jan 16 23:13:07 multinode-328490 kubelet[913]: E0116 23:13:07.610550     913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-7lcpl" podUID="2c5cd6ef-7b39-48aa-b234-13dda7343591"
	Jan 16 23:13:08 multinode-328490 kubelet[913]: E0116 23:13:08.610042     913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-b7wdd" podUID="22c6762c-c3f1-47f6-a02a-1936e80ca0c8"
	Jan 16 23:13:09 multinode-328490 kubelet[913]: E0116 23:13:09.156781     913 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 16 23:13:09 multinode-328490 kubelet[913]: E0116 23:13:09.156876     913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c5cd6ef-7b39-48aa-b234-13dda7343591-config-volume podName:2c5cd6ef-7b39-48aa-b234-13dda7343591 nodeName:}" failed. No retries permitted until 2024-01-16 23:13:13.156858155 +0000 UTC m=+13.800469754 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2c5cd6ef-7b39-48aa-b234-13dda7343591-config-volume") pod "coredns-5dd5756b68-7lcpl" (UID: "2c5cd6ef-7b39-48aa-b234-13dda7343591") : object "kube-system"/"coredns" not registered
	Jan 16 23:13:09 multinode-328490 kubelet[913]: E0116 23:13:09.257616     913 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 16 23:13:09 multinode-328490 kubelet[913]: E0116 23:13:09.257675     913 projected.go:198] Error preparing data for projected volume kube-api-access-5l746 for pod default/busybox-5b5d89c9d6-b7wdd: object "default"/"kube-root-ca.crt" not registered
	Jan 16 23:13:09 multinode-328490 kubelet[913]: E0116 23:13:09.257738     913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22c6762c-c3f1-47f6-a02a-1936e80ca0c8-kube-api-access-5l746 podName:22c6762c-c3f1-47f6-a02a-1936e80ca0c8 nodeName:}" failed. No retries permitted until 2024-01-16 23:13:13.257714162 +0000 UTC m=+13.901325762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5l746" (UniqueName: "kubernetes.io/projected/22c6762c-c3f1-47f6-a02a-1936e80ca0c8-kube-api-access-5l746") pod "busybox-5b5d89c9d6-b7wdd" (UID: "22c6762c-c3f1-47f6-a02a-1936e80ca0c8") : object "default"/"kube-root-ca.crt" not registered
	Jan 16 23:13:09 multinode-328490 kubelet[913]: E0116 23:13:09.610093     913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-7lcpl" podUID="2c5cd6ef-7b39-48aa-b234-13dda7343591"
	Jan 16 23:13:10 multinode-328490 kubelet[913]: I0116 23:13:10.323388     913 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 16 23:13:36 multinode-328490 kubelet[913]: I0116 23:13:36.786506     913 scope.go:117] "RemoveContainer" containerID="0bcdaf9d3668e8670b86a0ab914349e224025329d5bf46bee1fd0ed6818eafac"
	Jan 16 23:13:59 multinode-328490 kubelet[913]: E0116 23:13:59.628593     913 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 23:13:59 multinode-328490 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 23:13:59 multinode-328490 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 23:13:59 multinode-328490 kubelet[913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 23:14:59 multinode-328490 kubelet[913]: E0116 23:14:59.637083     913 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 23:14:59 multinode-328490 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 23:14:59 multinode-328490 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 23:14:59 multinode-328490 kubelet[913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 23:15:59 multinode-328490 kubelet[913]: E0116 23:15:59.631815     913 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 23:15:59 multinode-328490 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 23:15:59 multinode-328490 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 23:15:59 multinode-328490 kubelet[913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-328490 -n multinode-328490
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-328490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (690.12s)
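
Editor's note: the post-mortem above is gathered by shelling out to minikube and kubectl (the `status --format={{.APIServer}}` and `kubectl get po --field-selector=status.phase!=Running` runs shown just before the FAIL line) and recording stdout/stderr verbatim. A minimal Go sketch of that collection pattern follows; the function name collectPostMortem and the overall flow are illustrative assumptions, not the actual helpers_test.go code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// collectPostMortem runs the same kind of status/kubectl commands the
	// report quotes above and returns their combined output. Illustrative
	// sketch only; the real harness also records per-step timing.
	func collectPostMortem(profile string) (string, error) {
		cmds := [][]string{
			{"out/minikube-linux-amd64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile},
			{"kubectl", "--context", profile, "get", "po", "-A",
				"-o=jsonpath={.items[*].metadata.name}",
				"--field-selector=status.phase!=Running"},
		}
		var out string
		for _, c := range cmds {
			b, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			out += fmt.Sprintf("$ %v\n%s\n", c, b)
			if err != nil {
				return out, err
			}
		}
		return out, nil
	}

	func main() {
		log, err := collectPostMortem("multinode-328490")
		fmt.Print(log)
		if err != nil {
			fmt.Println("post-mortem command failed:", err)
		}
	}
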

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 stop
E0116 23:18:31.442425   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328490 stop: exit status 82 (2m1.177660626s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-328490"  ...
	* Stopping node "multinode-328490"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-328490 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328490 status: exit status 3 (18.697223754s)

                                                
                                                
-- stdout --
	multinode-328490
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-328490-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:19:13.934612   34217 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host
	E0116 23:19:13.934653   34217 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-328490 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328490 -n multinode-328490
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328490 -n multinode-328490: exit status 3 (3.17451878s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:19:17.262766   34310 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host
	E0116 23:19:17.262787   34310 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-328490" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.05s)
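
The failing sequence above can be driven by hand outside the test harness. The following is a minimal standalone sketch (not the test's actual code): it simply shells out to the same `stop` and `status` commands the test runs, with the binary path and profile name copied from the log above; adjust both for your own checkout.

```go
// Sketch only: re-runs the commands seen in the log against the same profile.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any exit error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	bin := "out/minikube-linux-amd64" // locally built binary used by this CI job
	profile := "multinode-328490"     // profile name taken from the log above

	// In the failure above, `stop` exits 82 (GUEST_STOP_TIMEOUT) because the
	// KVM guest never leaves the "Running" state within the stop timeout.
	if err := run(bin, "stop", "-p", profile); err != nil {
		fmt.Println("stop failed:", err)
	}

	// `status` then exits 3 with host: Error, since SSH to the node on port 22
	// is unreachable ("no route to host" in the stderr above).
	if err := run(bin, "status", "-p", profile); err != nil {
		fmt.Println("status failed:", err)
	}
}
```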

                                                
                                    
TestPreload (291.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-225106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0116 23:28:31.443099   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:29:04.014179   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-225106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m27.591188539s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-225106 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-225106 image pull gcr.io/k8s-minikube/busybox: (2.586772281s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-225106
E0116 23:30:47.136547   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:31:00.967744   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:31:34.490748   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-225106: exit status 82 (2m1.425629218s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-225106"  ...
	* Stopping node "test-preload-225106"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-225106 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-16 23:31:59.75816142 +0000 UTC m=+3342.973665228
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-225106 -n test-preload-225106
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-225106 -n test-preload-225106: exit status 3 (18.564743913s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:32:18.318673   37289 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host
	E0116 23:32:18.318695   37289 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.155:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-225106" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-225106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-225106
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-225106: (1.099353573s)
--- FAIL: TestPreload (291.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (140.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-771669 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-771669 --alsologtostderr -v=3: exit status 82 (2m1.606277565s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-771669"  ...
	* Stopping node "old-k8s-version-771669"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 23:46:53.157728   58743 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:46:53.157898   58743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:46:53.157906   58743 out.go:309] Setting ErrFile to fd 2...
	I0116 23:46:53.157913   58743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:46:53.158260   58743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:46:53.158637   58743 out.go:303] Setting JSON to false
	I0116 23:46:53.158750   58743 mustload.go:65] Loading cluster: old-k8s-version-771669
	I0116 23:46:53.159242   58743 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:46:53.159354   58743 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:46:53.239851   58743 mustload.go:65] Loading cluster: old-k8s-version-771669
	I0116 23:46:53.240102   58743 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:46:53.240155   58743 stop.go:39] StopHost: old-k8s-version-771669
	I0116 23:46:53.240813   58743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:46:53.240880   58743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:46:53.256742   58743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0116 23:46:53.257280   58743 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:46:53.257992   58743 main.go:141] libmachine: Using API Version  1
	I0116 23:46:53.258014   58743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:46:53.258309   58743 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:46:53.260764   58743 out.go:177] * Stopping node "old-k8s-version-771669"  ...
	I0116 23:46:53.262677   58743 main.go:141] libmachine: Stopping "old-k8s-version-771669"...
	I0116 23:46:53.262699   58743 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:46:53.264707   58743 main.go:141] libmachine: (old-k8s-version-771669) Calling .Stop
	I0116 23:46:53.267798   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 0/60
	I0116 23:46:54.269937   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 1/60
	I0116 23:46:55.271410   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 2/60
	I0116 23:46:56.273017   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 3/60
	I0116 23:46:57.274576   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 4/60
	I0116 23:46:58.276448   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 5/60
	I0116 23:46:59.278058   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 6/60
	I0116 23:47:00.279626   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 7/60
	I0116 23:47:01.281057   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 8/60
	I0116 23:47:02.282373   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 9/60
	I0116 23:47:03.284290   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 10/60
	I0116 23:47:04.285602   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 11/60
	I0116 23:47:05.287065   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 12/60
	I0116 23:47:06.289017   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 13/60
	I0116 23:47:07.290836   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 14/60
	I0116 23:47:08.292536   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 15/60
	I0116 23:47:09.293970   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 16/60
	I0116 23:47:10.295441   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 17/60
	I0116 23:47:11.296792   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 18/60
	I0116 23:47:12.298363   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 19/60
	I0116 23:47:13.299977   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 20/60
	I0116 23:47:14.301633   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 21/60
	I0116 23:47:15.302987   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 22/60
	I0116 23:47:16.304832   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 23/60
	I0116 23:47:17.307161   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 24/60
	I0116 23:47:18.309019   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 25/60
	I0116 23:47:19.310883   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 26/60
	I0116 23:47:20.312718   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 27/60
	I0116 23:47:21.314322   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 28/60
	I0116 23:47:22.315761   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 29/60
	I0116 23:47:23.317574   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 30/60
	I0116 23:47:24.319416   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 31/60
	I0116 23:47:25.321210   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 32/60
	I0116 23:47:26.322823   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 33/60
	I0116 23:47:27.324797   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 34/60
	I0116 23:47:28.327055   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 35/60
	I0116 23:47:29.328640   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 36/60
	I0116 23:47:30.330299   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 37/60
	I0116 23:47:31.331705   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 38/60
	I0116 23:47:32.333034   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 39/60
	I0116 23:47:33.334925   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 40/60
	I0116 23:47:34.336972   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 41/60
	I0116 23:47:35.338451   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 42/60
	I0116 23:47:36.340131   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 43/60
	I0116 23:47:37.341854   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 44/60
	I0116 23:47:38.343682   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 45/60
	I0116 23:47:39.345661   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 46/60
	I0116 23:47:40.347446   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 47/60
	I0116 23:47:41.348659   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 48/60
	I0116 23:47:42.350145   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 49/60
	I0116 23:47:43.352206   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 50/60
	I0116 23:47:44.353654   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 51/60
	I0116 23:47:45.355563   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 52/60
	I0116 23:47:46.356887   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 53/60
	I0116 23:47:47.358071   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 54/60
	I0116 23:47:48.359863   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 55/60
	I0116 23:47:49.361386   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 56/60
	I0116 23:47:50.362795   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 57/60
	I0116 23:47:51.364008   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 58/60
	I0116 23:47:52.365610   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 59/60
	I0116 23:47:53.366861   58743 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:47:53.366912   58743 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:47:53.366931   58743 retry.go:31] will retry after 1.170322632s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:47:54.537470   58743 stop.go:39] StopHost: old-k8s-version-771669
	I0116 23:47:54.537935   58743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:47:54.537986   58743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:47:54.552115   58743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I0116 23:47:54.552555   58743 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:47:54.553092   58743 main.go:141] libmachine: Using API Version  1
	I0116 23:47:54.553121   58743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:47:54.553498   58743 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:47:54.555567   58743 out.go:177] * Stopping node "old-k8s-version-771669"  ...
	I0116 23:47:54.557331   58743 main.go:141] libmachine: Stopping "old-k8s-version-771669"...
	I0116 23:47:54.557353   58743 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:47:54.559148   58743 main.go:141] libmachine: (old-k8s-version-771669) Calling .Stop
	I0116 23:47:54.562695   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 0/60
	I0116 23:47:55.564003   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 1/60
	I0116 23:47:56.565281   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 2/60
	I0116 23:47:57.566513   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 3/60
	I0116 23:47:58.567979   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 4/60
	I0116 23:47:59.569751   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 5/60
	I0116 23:48:00.571122   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 6/60
	I0116 23:48:01.572446   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 7/60
	I0116 23:48:02.573878   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 8/60
	I0116 23:48:03.575184   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 9/60
	I0116 23:48:04.577097   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 10/60
	I0116 23:48:05.578582   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 11/60
	I0116 23:48:06.579996   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 12/60
	I0116 23:48:07.581425   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 13/60
	I0116 23:48:08.582859   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 14/60
	I0116 23:48:09.584840   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 15/60
	I0116 23:48:10.586150   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 16/60
	I0116 23:48:11.587649   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 17/60
	I0116 23:48:12.589437   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 18/60
	I0116 23:48:13.590653   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 19/60
	I0116 23:48:14.593505   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 20/60
	I0116 23:48:15.594775   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 21/60
	I0116 23:48:16.596154   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 22/60
	I0116 23:48:17.597515   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 23/60
	I0116 23:48:18.599242   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 24/60
	I0116 23:48:19.601111   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 25/60
	I0116 23:48:20.602381   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 26/60
	I0116 23:48:21.603704   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 27/60
	I0116 23:48:22.605007   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 28/60
	I0116 23:48:23.606375   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 29/60
	I0116 23:48:24.608130   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 30/60
	I0116 23:48:25.609675   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 31/60
	I0116 23:48:26.611021   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 32/60
	I0116 23:48:27.612696   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 33/60
	I0116 23:48:28.613754   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 34/60
	I0116 23:48:29.615122   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 35/60
	I0116 23:48:30.616617   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 36/60
	I0116 23:48:31.617676   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 37/60
	I0116 23:48:32.618851   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 38/60
	I0116 23:48:33.620596   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 39/60
	I0116 23:48:34.622152   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 40/60
	I0116 23:48:35.623252   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 41/60
	I0116 23:48:36.624740   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 42/60
	I0116 23:48:37.625702   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 43/60
	I0116 23:48:38.627025   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 44/60
	I0116 23:48:39.628711   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 45/60
	I0116 23:48:40.629863   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 46/60
	I0116 23:48:41.631199   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 47/60
	I0116 23:48:42.632845   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 48/60
	I0116 23:48:43.634137   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 49/60
	I0116 23:48:44.635938   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 50/60
	I0116 23:48:45.637085   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 51/60
	I0116 23:48:46.638232   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 52/60
	I0116 23:48:47.639274   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 53/60
	I0116 23:48:48.640493   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 54/60
	I0116 23:48:49.642078   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 55/60
	I0116 23:48:50.643268   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 56/60
	I0116 23:48:51.644438   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 57/60
	I0116 23:48:52.645858   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 58/60
	I0116 23:48:53.647365   58743 main.go:141] libmachine: (old-k8s-version-771669) Waiting for machine to stop 59/60
	I0116 23:48:54.648273   58743 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:48:54.648315   58743 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:48:54.650212   58743 out.go:177] 
	W0116 23:48:54.651447   58743 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 23:48:54.651463   58743 out.go:239] * 
	* 
	W0116 23:48:54.653710   58743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 23:48:54.655155   58743 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-771669 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
E0116 23:48:54.883595   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:48:58.721400   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:49:00.584037   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669: exit status 3 (18.446103853s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:13.102625   59428 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	E0116 23:49:13.102648   59428 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-771669" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.05s)
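
The stderr trace above accounts for the ~2m1s duration before the GUEST_STOP_TIMEOUT: two 60-iteration, one-second-interval waits for the VM to stop, separated by a ~1.2s retry back-off. The sketch below is only an illustrative approximation of that observed behaviour, not minikube's source; the `stopVM` helper and the always-"Running" state function are assumptions made for the example.

```go
// Illustrative timing sketch of the wait/retry pattern visible in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopVM polls the guest state once per second, up to 60 times, mirroring the
// "Waiting for machine to stop N/60" lines in the trace above.
func stopVM(state func() string) error {
	for i := 0; i < 60; i++ {
		if state() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/60\n", i)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	alwaysRunning := func() string { return "Running" } // the guest never stops

	if err := stopVM(alwaysRunning); err != nil {
		time.Sleep(1200 * time.Millisecond) // "will retry after ~1.2s" in the log
		if err := stopVM(alwaysRunning); err != nil {
			// Roughly 2x60s has elapsed; the real binary exits with status 82 here.
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
		}
	}
}
```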

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (140.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-085322 --alsologtostderr -v=3
E0116 23:47:26.163539   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-085322 --alsologtostderr -v=3: exit status 82 (2m1.7980594s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-085322"  ...
	* Stopping node "no-preload-085322"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 23:47:25.960303   58985 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:47:25.960584   58985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:47:25.960594   58985 out.go:309] Setting ErrFile to fd 2...
	I0116 23:47:25.960599   58985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:47:25.960805   58985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:47:25.961071   58985 out.go:303] Setting JSON to false
	I0116 23:47:25.961170   58985 mustload.go:65] Loading cluster: no-preload-085322
	I0116 23:47:25.961614   58985 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:47:25.961689   58985 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:47:25.961854   58985 mustload.go:65] Loading cluster: no-preload-085322
	I0116 23:47:25.961956   58985 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:47:25.961981   58985 stop.go:39] StopHost: no-preload-085322
	I0116 23:47:25.962351   58985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:47:25.962412   58985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:47:25.977921   58985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0116 23:47:25.978419   58985 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:47:25.978994   58985 main.go:141] libmachine: Using API Version  1
	I0116 23:47:25.979037   58985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:47:25.979368   58985 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:47:25.982075   58985 out.go:177] * Stopping node "no-preload-085322"  ...
	I0116 23:47:25.983810   58985 main.go:141] libmachine: Stopping "no-preload-085322"...
	I0116 23:47:25.983831   58985 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:47:25.985600   58985 main.go:141] libmachine: (no-preload-085322) Calling .Stop
	I0116 23:47:25.989043   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 0/60
	I0116 23:47:26.990435   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 1/60
	I0116 23:47:27.991912   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 2/60
	I0116 23:47:28.993493   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 3/60
	I0116 23:47:29.995550   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 4/60
	I0116 23:47:30.997646   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 5/60
	I0116 23:47:31.999208   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 6/60
	I0116 23:47:33.001516   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 7/60
	I0116 23:47:34.002955   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 8/60
	I0116 23:47:35.004450   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 9/60
	I0116 23:47:36.006841   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 10/60
	I0116 23:47:37.008745   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 11/60
	I0116 23:47:38.009885   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 12/60
	I0116 23:47:39.011119   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 13/60
	I0116 23:47:40.012235   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 14/60
	I0116 23:47:41.013974   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 15/60
	I0116 23:47:42.015210   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 16/60
	I0116 23:47:43.016603   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 17/60
	I0116 23:47:44.017845   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 18/60
	I0116 23:47:45.019046   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 19/60
	I0116 23:47:46.021255   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 20/60
	I0116 23:47:47.022512   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 21/60
	I0116 23:47:48.023761   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 22/60
	I0116 23:47:49.024945   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 23/60
	I0116 23:47:50.026180   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 24/60
	I0116 23:47:51.027757   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 25/60
	I0116 23:47:52.028936   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 26/60
	I0116 23:47:53.030035   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 27/60
	I0116 23:47:54.031397   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 28/60
	I0116 23:47:55.033925   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 29/60
	I0116 23:47:56.035678   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 30/60
	I0116 23:47:57.037559   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 31/60
	I0116 23:47:58.038947   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 32/60
	I0116 23:47:59.040898   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 33/60
	I0116 23:48:00.042118   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 34/60
	I0116 23:48:01.044079   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 35/60
	I0116 23:48:02.045454   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 36/60
	I0116 23:48:03.046836   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 37/60
	I0116 23:48:04.048216   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 38/60
	I0116 23:48:05.049511   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 39/60
	I0116 23:48:06.051931   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 40/60
	I0116 23:48:07.053379   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 41/60
	I0116 23:48:08.054432   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 42/60
	I0116 23:48:09.055924   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 43/60
	I0116 23:48:10.057243   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 44/60
	I0116 23:48:11.059280   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 45/60
	I0116 23:48:12.060515   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 46/60
	I0116 23:48:13.061654   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 47/60
	I0116 23:48:14.062996   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 48/60
	I0116 23:48:15.064786   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 49/60
	I0116 23:48:16.067047   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 50/60
	I0116 23:48:17.068254   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 51/60
	I0116 23:48:18.069587   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 52/60
	I0116 23:48:19.070877   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 53/60
	I0116 23:48:20.072176   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 54/60
	I0116 23:48:21.074135   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 55/60
	I0116 23:48:22.075370   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 56/60
	I0116 23:48:23.076688   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 57/60
	I0116 23:48:24.078021   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 58/60
	I0116 23:48:25.079321   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 59/60
	I0116 23:48:26.080549   58985 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:48:26.080607   58985 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:48:26.080623   58985 retry.go:31] will retry after 1.489306687s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:48:27.571220   58985 stop.go:39] StopHost: no-preload-085322
	I0116 23:48:27.571580   58985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:48:27.571640   58985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:48:27.585744   58985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35077
	I0116 23:48:27.586233   58985 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:48:27.586690   58985 main.go:141] libmachine: Using API Version  1
	I0116 23:48:27.586710   58985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:48:27.587029   58985 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:48:27.590157   58985 out.go:177] * Stopping node "no-preload-085322"  ...
	I0116 23:48:27.591464   58985 main.go:141] libmachine: Stopping "no-preload-085322"...
	I0116 23:48:27.591478   58985 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:48:27.593086   58985 main.go:141] libmachine: (no-preload-085322) Calling .Stop
	I0116 23:48:27.596377   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 0/60
	I0116 23:48:28.597823   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 1/60
	I0116 23:48:29.599176   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 2/60
	I0116 23:48:30.602016   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 3/60
	I0116 23:48:31.603514   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 4/60
	I0116 23:48:32.605316   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 5/60
	I0116 23:48:33.606740   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 6/60
	I0116 23:48:34.608041   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 7/60
	I0116 23:48:35.609631   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 8/60
	I0116 23:48:36.610952   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 9/60
	I0116 23:48:37.613006   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 10/60
	I0116 23:48:38.614400   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 11/60
	I0116 23:48:39.615799   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 12/60
	I0116 23:48:40.617266   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 13/60
	I0116 23:48:41.618787   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 14/60
	I0116 23:48:42.620658   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 15/60
	I0116 23:48:43.622476   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 16/60
	I0116 23:48:44.624080   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 17/60
	I0116 23:48:45.625511   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 18/60
	I0116 23:48:46.626953   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 19/60
	I0116 23:48:47.628593   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 20/60
	I0116 23:48:48.630014   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 21/60
	I0116 23:48:49.631442   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 22/60
	I0116 23:48:50.632869   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 23/60
	I0116 23:48:51.634282   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 24/60
	I0116 23:48:52.636310   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 25/60
	I0116 23:48:53.637760   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 26/60
	I0116 23:48:54.639406   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 27/60
	I0116 23:48:55.640754   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 28/60
	I0116 23:48:56.642103   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 29/60
	I0116 23:48:57.644457   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 30/60
	I0116 23:48:58.645995   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 31/60
	I0116 23:48:59.647430   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 32/60
	I0116 23:49:00.648810   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 33/60
	I0116 23:49:01.650250   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 34/60
	I0116 23:49:02.652286   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 35/60
	I0116 23:49:03.653639   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 36/60
	I0116 23:49:04.655369   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 37/60
	I0116 23:49:05.656572   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 38/60
	I0116 23:49:06.658034   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 39/60
	I0116 23:49:07.659627   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 40/60
	I0116 23:49:08.661120   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 41/60
	I0116 23:49:09.662255   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 42/60
	I0116 23:49:10.663541   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 43/60
	I0116 23:49:11.664680   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 44/60
	I0116 23:49:12.666169   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 45/60
	I0116 23:49:13.667367   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 46/60
	I0116 23:49:14.668657   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 47/60
	I0116 23:49:15.669749   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 48/60
	I0116 23:49:16.670828   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 49/60
	I0116 23:49:17.672492   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 50/60
	I0116 23:49:18.673606   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 51/60
	I0116 23:49:19.674861   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 52/60
	I0116 23:49:20.675986   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 53/60
	I0116 23:49:21.677026   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 54/60
	I0116 23:49:22.678551   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 55/60
	I0116 23:49:23.680539   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 56/60
	I0116 23:49:24.681706   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 57/60
	I0116 23:49:25.683012   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 58/60
	I0116 23:49:26.684500   58985 main.go:141] libmachine: (no-preload-085322) Waiting for machine to stop 59/60
	I0116 23:49:27.685234   58985 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:49:27.685270   58985 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:49:27.686963   58985 out.go:177] 
	W0116 23:49:27.688176   58985 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 23:49:27.688194   58985 out.go:239] * 
	* 
	W0116 23:49:27.690498   58985 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 23:49:27.691762   58985 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-085322 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322: exit status 3 (18.433556791s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:46.126710   59663 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.183:22: connect: no route to host
	E0116 23:49:46.126733   59663 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-085322" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-837871 --alsologtostderr -v=3
E0116 23:47:38.080238   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:43.201054   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:44.086181   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:53.442259   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-837871 --alsologtostderr -v=3: exit status 82 (2m1.550991094s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-837871"  ...
	* Stopping node "embed-certs-837871"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 23:47:36.672098   59091 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:47:36.672278   59091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:47:36.672291   59091 out.go:309] Setting ErrFile to fd 2...
	I0116 23:47:36.672298   59091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:47:36.672529   59091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:47:36.672758   59091 out.go:303] Setting JSON to false
	I0116 23:47:36.672832   59091 mustload.go:65] Loading cluster: embed-certs-837871
	I0116 23:47:36.673194   59091 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:47:36.673258   59091 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:47:36.673456   59091 mustload.go:65] Loading cluster: embed-certs-837871
	I0116 23:47:36.673645   59091 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:47:36.673695   59091 stop.go:39] StopHost: embed-certs-837871
	I0116 23:47:36.674049   59091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:47:36.674095   59091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:47:36.688152   59091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41045
	I0116 23:47:36.688610   59091 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:47:36.689208   59091 main.go:141] libmachine: Using API Version  1
	I0116 23:47:36.689233   59091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:47:36.689552   59091 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:47:36.691943   59091 out.go:177] * Stopping node "embed-certs-837871"  ...
	I0116 23:47:36.693209   59091 main.go:141] libmachine: Stopping "embed-certs-837871"...
	I0116 23:47:36.693231   59091 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:47:36.694857   59091 main.go:141] libmachine: (embed-certs-837871) Calling .Stop
	I0116 23:47:36.698075   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 0/60
	I0116 23:47:37.699560   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 1/60
	I0116 23:47:38.700833   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 2/60
	I0116 23:47:39.702320   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 3/60
	I0116 23:47:40.703695   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 4/60
	I0116 23:47:41.705627   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 5/60
	I0116 23:47:42.707005   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 6/60
	I0116 23:47:43.708206   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 7/60
	I0116 23:47:44.709801   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 8/60
	I0116 23:47:45.711156   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 9/60
	I0116 23:47:46.713272   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 10/60
	I0116 23:47:47.714492   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 11/60
	I0116 23:47:48.715866   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 12/60
	I0116 23:47:49.717071   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 13/60
	I0116 23:47:50.718274   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 14/60
	I0116 23:47:51.719569   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 15/60
	I0116 23:47:52.720836   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 16/60
	I0116 23:47:53.722802   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 17/60
	I0116 23:47:54.724100   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 18/60
	I0116 23:47:55.725940   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 19/60
	I0116 23:47:56.727893   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 20/60
	I0116 23:47:57.729275   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 21/60
	I0116 23:47:58.730516   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 22/60
	I0116 23:47:59.732812   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 23/60
	I0116 23:48:00.734241   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 24/60
	I0116 23:48:01.736160   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 25/60
	I0116 23:48:02.737589   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 26/60
	I0116 23:48:03.739013   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 27/60
	I0116 23:48:04.740760   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 28/60
	I0116 23:48:05.742364   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 29/60
	I0116 23:48:06.744431   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 30/60
	I0116 23:48:07.745898   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 31/60
	I0116 23:48:08.747397   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 32/60
	I0116 23:48:09.748591   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 33/60
	I0116 23:48:10.750114   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 34/60
	I0116 23:48:11.751887   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 35/60
	I0116 23:48:12.753296   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 36/60
	I0116 23:48:13.754751   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 37/60
	I0116 23:48:14.756202   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 38/60
	I0116 23:48:15.757430   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 39/60
	I0116 23:48:16.759682   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 40/60
	I0116 23:48:17.761012   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 41/60
	I0116 23:48:18.762354   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 42/60
	I0116 23:48:19.763625   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 43/60
	I0116 23:48:20.765006   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 44/60
	I0116 23:48:21.766873   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 45/60
	I0116 23:48:22.768741   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 46/60
	I0116 23:48:23.770243   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 47/60
	I0116 23:48:24.771502   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 48/60
	I0116 23:48:25.772849   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 49/60
	I0116 23:48:26.774891   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 50/60
	I0116 23:48:27.776336   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 51/60
	I0116 23:48:28.777706   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 52/60
	I0116 23:48:29.779148   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 53/60
	I0116 23:48:30.780572   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 54/60
	I0116 23:48:31.782619   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 55/60
	I0116 23:48:32.784012   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 56/60
	I0116 23:48:33.785546   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 57/60
	I0116 23:48:34.787008   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 58/60
	I0116 23:48:35.788771   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 59/60
	I0116 23:48:36.790052   59091 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:48:36.790115   59091 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:48:36.790137   59091 retry.go:31] will retry after 1.243832286s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:48:38.034502   59091 stop.go:39] StopHost: embed-certs-837871
	I0116 23:48:38.034891   59091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:48:38.034931   59091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:48:38.049062   59091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
	I0116 23:48:38.049474   59091 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:48:38.049948   59091 main.go:141] libmachine: Using API Version  1
	I0116 23:48:38.049979   59091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:48:38.050260   59091 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:48:38.052192   59091 out.go:177] * Stopping node "embed-certs-837871"  ...
	I0116 23:48:38.053394   59091 main.go:141] libmachine: Stopping "embed-certs-837871"...
	I0116 23:48:38.053408   59091 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:48:38.055299   59091 main.go:141] libmachine: (embed-certs-837871) Calling .Stop
	I0116 23:48:38.059075   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 0/60
	I0116 23:48:39.060519   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 1/60
	I0116 23:48:40.061957   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 2/60
	I0116 23:48:41.063554   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 3/60
	I0116 23:48:42.065081   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 4/60
	I0116 23:48:43.067170   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 5/60
	I0116 23:48:44.068729   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 6/60
	I0116 23:48:45.070132   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 7/60
	I0116 23:48:46.071474   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 8/60
	I0116 23:48:47.072845   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 9/60
	I0116 23:48:48.075031   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 10/60
	I0116 23:48:49.076595   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 11/60
	I0116 23:48:50.077959   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 12/60
	I0116 23:48:51.079453   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 13/60
	I0116 23:48:52.080991   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 14/60
	I0116 23:48:53.082923   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 15/60
	I0116 23:48:54.084477   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 16/60
	I0116 23:48:55.085971   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 17/60
	I0116 23:48:56.087421   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 18/60
	I0116 23:48:57.088875   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 19/60
	I0116 23:48:58.090727   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 20/60
	I0116 23:48:59.092217   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 21/60
	I0116 23:49:00.093769   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 22/60
	I0116 23:49:01.095317   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 23/60
	I0116 23:49:02.096858   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 24/60
	I0116 23:49:03.098811   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 25/60
	I0116 23:49:04.100293   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 26/60
	I0116 23:49:05.101845   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 27/60
	I0116 23:49:06.103252   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 28/60
	I0116 23:49:07.104645   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 29/60
	I0116 23:49:08.106376   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 30/60
	I0116 23:49:09.107937   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 31/60
	I0116 23:49:10.109379   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 32/60
	I0116 23:49:11.110797   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 33/60
	I0116 23:49:12.112251   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 34/60
	I0116 23:49:13.114119   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 35/60
	I0116 23:49:14.115514   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 36/60
	I0116 23:49:15.117049   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 37/60
	I0116 23:49:16.118297   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 38/60
	I0116 23:49:17.119834   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 39/60
	I0116 23:49:18.121667   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 40/60
	I0116 23:49:19.123202   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 41/60
	I0116 23:49:20.124825   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 42/60
	I0116 23:49:21.126352   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 43/60
	I0116 23:49:22.127725   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 44/60
	I0116 23:49:23.129396   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 45/60
	I0116 23:49:24.130780   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 46/60
	I0116 23:49:25.132409   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 47/60
	I0116 23:49:26.133786   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 48/60
	I0116 23:49:27.135320   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 49/60
	I0116 23:49:28.136633   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 50/60
	I0116 23:49:29.137926   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 51/60
	I0116 23:49:30.139382   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 52/60
	I0116 23:49:31.140705   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 53/60
	I0116 23:49:32.142208   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 54/60
	I0116 23:49:33.144033   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 55/60
	I0116 23:49:34.145393   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 56/60
	I0116 23:49:35.146877   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 57/60
	I0116 23:49:36.148777   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 58/60
	I0116 23:49:37.150475   59091 main.go:141] libmachine: (embed-certs-837871) Waiting for machine to stop 59/60
	I0116 23:49:38.151349   59091 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:49:38.151392   59091 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:49:38.153529   59091 out.go:177] 
	W0116 23:49:38.155426   59091 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 23:49:38.155449   59091 out.go:239] * 
	* 
	W0116 23:49:38.157721   59091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 23:49:38.159068   59091 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-837871 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871
E0116 23:49:41.171722   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:41.177019   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:41.187307   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:41.207606   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:41.247896   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:41.328201   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:41.488801   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:41.545151   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:49:41.809274   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:42.450268   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:43.730918   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871: exit status 3 (18.462222108s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:56.622704   59733 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0116 23:49:56.622724   59733 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-837871" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.01s)
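Note: the two rounds of "Waiting for machine to stop 0/60 ... 59/60" above are minikube polling the driver once per second for up to 60 attempts, retrying the whole stop once after a short backoff, and then exiting with GUEST_STOP_TIMEOUT (exit status 82) when the VM still reports "Running". A minimal Go sketch of that polling pattern follows; stopVM and vmState are hypothetical stand-ins for the libmachine driver calls (.Stop / .GetState) seen in the log, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for the driver calls (.Stop / .GetState) in the log above.
func stopVM(name string)         {}                   // ask the driver to power the VM off
func vmState(name string) string { return "Running" } // in the failing run this never changes

// stopHost mirrors the observed behaviour: request a stop, then poll once
// per second for up to 60 attempts before giving up.
func stopHost(name string) error {
	stopVM(name)
	for i := 0; i < 60; i++ {
		fmt.Printf("Waiting for machine to stop %d/60\n", i)
		if vmState(name) != "Running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	name := "embed-certs-837871"
	if err := stopHost(name); err != nil {
		time.Sleep(1200 * time.Millisecond) // "will retry after 1.243832286s" in the log
		if err = stopHost(name); err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err) // exit status 82
		}
	}
}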

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-967325 --alsologtostderr -v=3
E0116 23:48:13.923380   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:48:14.491092   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:48:19.621403   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:19.626659   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:19.636875   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:19.657154   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:19.697480   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:19.777868   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:19.938290   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:20.258911   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:20.899669   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:22.180743   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:24.741517   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:29.862525   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:31.442443   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:48:38.240522   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:38.245786   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:38.256015   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:38.276316   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:38.317234   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:38.397628   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:38.558081   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:38.878460   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:39.518809   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:40.103314   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:48:40.799350   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:43.360213   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:48:45.527597   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:48:48.480464   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-967325 --alsologtostderr -v=3: exit status 82 (2m1.588091116s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-967325"  ...
	* Stopping node "default-k8s-diff-port-967325"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 23:48:06.204070   59271 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:48:06.204318   59271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:48:06.204326   59271 out.go:309] Setting ErrFile to fd 2...
	I0116 23:48:06.204330   59271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:48:06.204552   59271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:48:06.204779   59271 out.go:303] Setting JSON to false
	I0116 23:48:06.204860   59271 mustload.go:65] Loading cluster: default-k8s-diff-port-967325
	I0116 23:48:06.205195   59271 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:48:06.205268   59271 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:48:06.205424   59271 mustload.go:65] Loading cluster: default-k8s-diff-port-967325
	I0116 23:48:06.205524   59271 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:48:06.205546   59271 stop.go:39] StopHost: default-k8s-diff-port-967325
	I0116 23:48:06.205914   59271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:48:06.205967   59271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:48:06.219788   59271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36377
	I0116 23:48:06.220252   59271 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:48:06.220832   59271 main.go:141] libmachine: Using API Version  1
	I0116 23:48:06.220857   59271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:48:06.221162   59271 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:48:06.223583   59271 out.go:177] * Stopping node "default-k8s-diff-port-967325"  ...
	I0116 23:48:06.225051   59271 main.go:141] libmachine: Stopping "default-k8s-diff-port-967325"...
	I0116 23:48:06.225074   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:48:06.226637   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Stop
	I0116 23:48:06.229676   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 0/60
	I0116 23:48:07.231081   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 1/60
	I0116 23:48:08.232657   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 2/60
	I0116 23:48:09.233936   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 3/60
	I0116 23:48:10.235306   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 4/60
	I0116 23:48:11.237484   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 5/60
	I0116 23:48:12.238949   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 6/60
	I0116 23:48:13.240218   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 7/60
	I0116 23:48:14.241609   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 8/60
	I0116 23:48:15.243068   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 9/60
	I0116 23:48:16.244207   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 10/60
	I0116 23:48:17.245573   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 11/60
	I0116 23:48:18.246976   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 12/60
	I0116 23:48:19.248328   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 13/60
	I0116 23:48:20.249623   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 14/60
	I0116 23:48:21.251096   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 15/60
	I0116 23:48:22.252504   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 16/60
	I0116 23:48:23.253929   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 17/60
	I0116 23:48:24.255205   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 18/60
	I0116 23:48:25.256464   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 19/60
	I0116 23:48:26.258574   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 20/60
	I0116 23:48:27.259978   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 21/60
	I0116 23:48:28.261338   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 22/60
	I0116 23:48:29.262700   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 23/60
	I0116 23:48:30.264030   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 24/60
	I0116 23:48:31.265971   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 25/60
	I0116 23:48:32.267269   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 26/60
	I0116 23:48:33.268871   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 27/60
	I0116 23:48:34.270135   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 28/60
	I0116 23:48:35.271604   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 29/60
	I0116 23:48:36.273895   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 30/60
	I0116 23:48:37.275261   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 31/60
	I0116 23:48:38.276906   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 32/60
	I0116 23:48:39.278298   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 33/60
	I0116 23:48:40.279876   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 34/60
	I0116 23:48:41.282062   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 35/60
	I0116 23:48:42.283251   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 36/60
	I0116 23:48:43.284848   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 37/60
	I0116 23:48:44.286210   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 38/60
	I0116 23:48:45.287890   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 39/60
	I0116 23:48:46.290280   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 40/60
	I0116 23:48:47.291649   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 41/60
	I0116 23:48:48.293041   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 42/60
	I0116 23:48:49.294497   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 43/60
	I0116 23:48:50.295931   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 44/60
	I0116 23:48:51.297913   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 45/60
	I0116 23:48:52.299342   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 46/60
	I0116 23:48:53.300704   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 47/60
	I0116 23:48:54.302167   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 48/60
	I0116 23:48:55.303634   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 49/60
	I0116 23:48:56.304904   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 50/60
	I0116 23:48:57.306593   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 51/60
	I0116 23:48:58.307865   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 52/60
	I0116 23:48:59.309177   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 53/60
	I0116 23:49:00.310517   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 54/60
	I0116 23:49:01.312280   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 55/60
	I0116 23:49:02.313661   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 56/60
	I0116 23:49:03.314911   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 57/60
	I0116 23:49:04.316807   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 58/60
	I0116 23:49:05.318128   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 59/60
	I0116 23:49:06.319518   59271 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:49:06.319561   59271 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:49:06.319579   59271 retry.go:31] will retry after 1.295721551s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:49:07.616000   59271 stop.go:39] StopHost: default-k8s-diff-port-967325
	I0116 23:49:07.616368   59271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:49:07.616420   59271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:49:07.630597   59271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0116 23:49:07.630980   59271 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:49:07.631452   59271 main.go:141] libmachine: Using API Version  1
	I0116 23:49:07.631475   59271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:49:07.631747   59271 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:49:07.633770   59271 out.go:177] * Stopping node "default-k8s-diff-port-967325"  ...
	I0116 23:49:07.635103   59271 main.go:141] libmachine: Stopping "default-k8s-diff-port-967325"...
	I0116 23:49:07.635116   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:49:07.636719   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Stop
	I0116 23:49:07.639986   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 0/60
	I0116 23:49:08.641317   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 1/60
	I0116 23:49:09.642740   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 2/60
	I0116 23:49:10.644147   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 3/60
	I0116 23:49:11.645597   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 4/60
	I0116 23:49:12.647443   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 5/60
	I0116 23:49:13.649109   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 6/60
	I0116 23:49:14.650706   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 7/60
	I0116 23:49:15.652118   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 8/60
	I0116 23:49:16.653579   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 9/60
	I0116 23:49:17.655744   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 10/60
	I0116 23:49:18.657346   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 11/60
	I0116 23:49:19.658689   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 12/60
	I0116 23:49:20.660396   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 13/60
	I0116 23:49:21.661680   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 14/60
	I0116 23:49:22.663507   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 15/60
	I0116 23:49:23.665059   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 16/60
	I0116 23:49:24.666465   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 17/60
	I0116 23:49:25.668769   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 18/60
	I0116 23:49:26.670241   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 19/60
	I0116 23:49:27.672096   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 20/60
	I0116 23:49:28.673443   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 21/60
	I0116 23:49:29.674904   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 22/60
	I0116 23:49:30.676200   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 23/60
	I0116 23:49:31.677511   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 24/60
	I0116 23:49:32.679111   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 25/60
	I0116 23:49:33.680565   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 26/60
	I0116 23:49:34.682067   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 27/60
	I0116 23:49:35.683599   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 28/60
	I0116 23:49:36.684853   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 29/60
	I0116 23:49:37.686644   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 30/60
	I0116 23:49:38.687985   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 31/60
	I0116 23:49:39.689290   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 32/60
	I0116 23:49:40.690672   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 33/60
	I0116 23:49:41.691898   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 34/60
	I0116 23:49:42.694227   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 35/60
	I0116 23:49:43.695592   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 36/60
	I0116 23:49:44.696936   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 37/60
	I0116 23:49:45.698171   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 38/60
	I0116 23:49:46.699588   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 39/60
	I0116 23:49:47.701431   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 40/60
	I0116 23:49:48.702851   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 41/60
	I0116 23:49:49.704356   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 42/60
	I0116 23:49:50.705629   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 43/60
	I0116 23:49:51.707038   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 44/60
	I0116 23:49:52.708708   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 45/60
	I0116 23:49:53.710043   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 46/60
	I0116 23:49:54.711093   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 47/60
	I0116 23:49:55.712487   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 48/60
	I0116 23:49:56.713587   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 49/60
	I0116 23:49:57.715344   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 50/60
	I0116 23:49:58.716570   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 51/60
	I0116 23:49:59.718223   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 52/60
	I0116 23:50:00.719557   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 53/60
	I0116 23:50:01.720844   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 54/60
	I0116 23:50:02.722629   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 55/60
	I0116 23:50:03.724092   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 56/60
	I0116 23:50:04.725405   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 57/60
	I0116 23:50:05.726865   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 58/60
	I0116 23:50:06.728777   59271 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for machine to stop 59/60
	I0116 23:50:07.729722   59271 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 23:50:07.729764   59271 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 23:50:07.731730   59271 out.go:177] 
	W0116 23:50:07.733038   59271 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 23:50:07.733054   59271 out.go:239] * 
	* 
	W0116 23:50:07.735198   59271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 23:50:07.737000   59271 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-967325 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325: exit status 3 (18.580342071s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:50:26.318657   60033 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	E0116 23:50:26.318676   60033 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-967325" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.17s)
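Note: the post-mortem status checks in these Stop failures all die at the same point: the host probe tries to open an SSH session to the node, and the TCP dial to port 22 fails with "no route to host", which status then surfaces as an "Error" host state (exit status 3). The network-level symptom alone can be approximated with a plain dial, as in the sketch below; the address is the one from this test's output, and any unreachable host behaves the same.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Node address taken from the status output above.
	addr := "192.168.61.144:22"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On this CI host the error is "connect: no route to host"; minikube's
		// status command reports the same condition as host state "Error".
		fmt.Println("ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable; status would go on to query the guest")
}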

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669: exit status 3 (3.200146469s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:16.302687   59510 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	E0116 23:49:16.302709   59510 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-771669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0116 23:49:19.202624   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-771669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152548338s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-771669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669: exit status 3 (3.06322728s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:25.518711   59581 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	E0116 23:49:25.518734   59581 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-771669" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)
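Note: this failure is a knock-on effect of the earlier stop timeout. The serial flow expects stop, then a host status of "Stopped", then a successful addon enable; with the VM still unreachable, status instead returns "Error" (exit status 3) and the addon enable aborts at its paused-state check with MK_ADDON_ENABLE_PAUSED (exit status 11). A rough sketch of that expected ordering, using a simplified runner rather than the test helpers, is below; the exit codes in the comments are the ones observed in this report, not a complete list of minikube exit codes.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its exit code; a much-simplified stand-in
// for the test helpers' (dbg) Run wrapper.
func run(name string, args ...string) int {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err == nil {
		return 0
	}
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	bin := "out/minikube-linux-amd64"
	profile := "old-k8s-version-771669"

	// Expected after a clean stop: host status "Stopped". In this run the stop
	// timed out, so this returned exit status 3 with "Error".
	fmt.Println("status exit:", run(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile))

	// Expected to succeed against a stopped profile. In this run it failed with
	// exit status 11 because the paused-state check could not reach the node over SSH.
	fmt.Println("addons enable exit:", run(bin, "addons", "enable", "dashboard", "-p", profile))
}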

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322
E0116 23:49:46.291754   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322: exit status 3 (3.167611268s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:49.294697   59774 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.183:22: connect: no route to host
	E0116 23:49:49.294719   59774 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.183:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-085322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0116 23:49:51.412131   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:49:55.289902   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:49:55.295175   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:49:55.305425   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:49:55.325704   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:49:55.365982   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:49:55.446917   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-085322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153220881s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.183:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-085322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322
E0116 23:49:55.607815   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:49:55.928338   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:49:56.569261   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322: exit status 3 (3.062628653s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:58.510694   59868 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.183:22: connect: no route to host
	E0116 23:49:58.510714   59868 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-085322" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871
E0116 23:49:57.849733   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871: exit status 3 (3.16771443s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:49:59.790671   59897 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0116 23:49:59.790694   59897 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-837871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0116 23:50:00.163645   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:50:00.410024   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:50:01.652528   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:50:05.530790   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-837871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15224501s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-837871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871
E0116 23:50:07.448217   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871: exit status 3 (3.063528426s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:50:09.006775   60003 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0116 23:50:09.006795   60003 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-837871" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325: exit status 3 (3.168112263s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:50:29.486606   60140 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	E0116 23:50:29.486635   60140 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-967325 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0116 23:50:30.497175   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-967325 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152168312s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-967325 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
E0116 23:50:36.252743   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325: exit status 3 (3.063105178s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 23:50:38.702677   60227 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	E0116 23:50:38.702697   60227 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-967325" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 23:57:10.185244   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:57:23.604390   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:57:32.960623   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:58:19.621958   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:58:31.442976   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:58:38.241025   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-771669 -n old-k8s-version-771669
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:05:04.374044719 +0000 UTC m=+5327.589548530
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25: (1.585072696s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo cat                              | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:50:38.759896   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.759907   60269 out.go:309] Setting ErrFile to fd 2...
	I0116 23:50:38.759914   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.760126   60269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:50:38.760678   60269 out.go:303] Setting JSON to false
	I0116 23:50:38.761641   60269 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5585,"bootTime":1705443454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:50:38.761709   60269 start.go:138] virtualization: kvm guest
	I0116 23:50:38.763997   60269 out.go:177] * [default-k8s-diff-port-967325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:50:38.765368   60269 notify.go:220] Checking for updates...
	I0116 23:50:38.767255   60269 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:50:38.768689   60269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:50:38.770002   60269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:50:38.771265   60269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:50:38.772478   60269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:50:38.773887   60269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:50:38.775771   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:50:38.776343   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.776406   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.790484   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0116 23:50:38.790881   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.791331   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.791354   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.791767   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.791948   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.792207   60269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:50:38.792478   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.792512   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.806373   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0116 23:50:38.806769   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.807352   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.807377   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.807713   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.807888   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.844486   60269 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:50:38.845772   60269 start.go:298] selected driver: kvm2
	I0116 23:50:38.845786   60269 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.845896   60269 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:50:38.846669   60269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.846746   60269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:50:38.861437   60269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:50:38.861794   60269 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:50:38.861869   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:50:38.861886   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:50:38.861903   60269 start_flags.go:321] config:
	{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.862070   60269 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.864512   60269 out.go:177] * Starting control plane node default-k8s-diff-port-967325 in cluster default-k8s-diff-port-967325
	I0116 23:50:35.694534   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.766489   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.865813   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:50:38.865854   60269 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:50:38.865868   60269 cache.go:56] Caching tarball of preloaded images
	I0116 23:50:38.865946   60269 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:50:38.865958   60269 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:50:38.866067   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:50:38.866254   60269 start.go:365] acquiring machines lock for default-k8s-diff-port-967325: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:50:44.846593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:47.918614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:53.998619   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:57.070626   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:03.150612   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:06.222615   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:12.302594   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:15.374637   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:21.454609   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:24.526620   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:30.606636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:33.678599   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:39.758623   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:42.830638   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:48.910588   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:51.982570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:58.062585   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:01.134627   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:07.214606   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:10.286692   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:16.366642   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:19.438617   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:25.518614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:28.590572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:34.670577   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:37.742593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:43.822547   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:46.894566   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:52.974586   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:56.046663   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:02.126625   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:05.198647   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:11.278567   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:14.350629   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:20.430640   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:23.502572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:29.582639   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:32.654601   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:38.734636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:41.806621   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:47.886613   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:50.958654   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:57.038576   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:00.110570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:03.114737   59938 start.go:369] acquired machines lock for "no-preload-085322" in 4m4.444202574s
	I0116 23:54:03.114809   59938 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:03.114817   59938 fix.go:54] fixHost starting: 
	I0116 23:54:03.115151   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:03.115188   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:03.129740   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0116 23:54:03.130141   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:03.130598   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:54:03.130619   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:03.130926   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:03.131095   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:03.131232   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:54:03.132851   59938 fix.go:102] recreateIfNeeded on no-preload-085322: state=Stopped err=<nil>
	I0116 23:54:03.132873   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	W0116 23:54:03.133043   59938 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:03.134884   59938 out.go:177] * Restarting existing kvm2 VM for "no-preload-085322" ...
	I0116 23:54:03.136262   59938 main.go:141] libmachine: (no-preload-085322) Calling .Start
	I0116 23:54:03.136432   59938 main.go:141] libmachine: (no-preload-085322) Ensuring networks are active...
	I0116 23:54:03.137113   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network default is active
	I0116 23:54:03.137528   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network mk-no-preload-085322 is active
	I0116 23:54:03.137880   59938 main.go:141] libmachine: (no-preload-085322) Getting domain xml...
	I0116 23:54:03.138613   59938 main.go:141] libmachine: (no-preload-085322) Creating domain...
	I0116 23:54:03.112375   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:03.112409   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:54:03.114601   59622 machine.go:91] provisioned docker machine in 4m37.41859178s
	I0116 23:54:03.114647   59622 fix.go:56] fixHost completed within 4m37.439054279s
	I0116 23:54:03.114654   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 4m37.439073197s
	W0116 23:54:03.114678   59622 start.go:694] error starting host: provision: host is not running
	W0116 23:54:03.114769   59622 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:54:03.114780   59622 start.go:709] Will try again in 5 seconds ...
	I0116 23:54:04.327758   59938 main.go:141] libmachine: (no-preload-085322) Waiting to get IP...
	I0116 23:54:04.328580   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.329077   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.329172   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.329065   60794 retry.go:31] will retry after 242.417074ms: waiting for machine to come up
	I0116 23:54:04.573623   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.574286   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.574314   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.574234   60794 retry.go:31] will retry after 376.338621ms: waiting for machine to come up
	I0116 23:54:04.952081   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.952569   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.952609   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.952512   60794 retry.go:31] will retry after 437.645823ms: waiting for machine to come up
	I0116 23:54:05.392169   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.392672   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.392701   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.392621   60794 retry.go:31] will retry after 422.797207ms: waiting for machine to come up
	I0116 23:54:05.817196   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.817610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.817639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.817571   60794 retry.go:31] will retry after 640.372887ms: waiting for machine to come up
	I0116 23:54:06.459387   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:06.459792   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:06.459822   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:06.459719   60794 retry.go:31] will retry after 683.537292ms: waiting for machine to come up
	I0116 23:54:07.144668   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:07.144994   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:07.145027   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:07.144980   60794 retry.go:31] will retry after 898.931175ms: waiting for machine to come up
	I0116 23:54:08.045022   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:08.045409   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:08.045437   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:08.045355   60794 retry.go:31] will retry after 1.288697598s: waiting for machine to come up
	I0116 23:54:08.117270   59622 start.go:365] acquiring machines lock for old-k8s-version-771669: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:54:09.335202   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:09.335610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:09.335639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:09.335546   60794 retry.go:31] will retry after 1.355850443s: waiting for machine to come up
	I0116 23:54:10.693078   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:10.693554   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:10.693606   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:10.693520   60794 retry.go:31] will retry after 1.916329826s: waiting for machine to come up
	I0116 23:54:12.611840   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:12.612332   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:12.612367   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:12.612282   60794 retry.go:31] will retry after 2.556862035s: waiting for machine to come up
	I0116 23:54:15.171589   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:15.172039   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:15.172068   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:15.171972   60794 retry.go:31] will retry after 2.519530929s: waiting for machine to come up
	I0116 23:54:17.694557   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:17.694939   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:17.694968   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:17.694886   60794 retry.go:31] will retry after 3.090458186s: waiting for machine to come up
	I0116 23:54:21.986927   60073 start.go:369] acquired machines lock for "embed-certs-837871" in 4m12.827160117s
	I0116 23:54:21.986990   60073 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:21.986998   60073 fix.go:54] fixHost starting: 
	I0116 23:54:21.987380   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:21.987421   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:22.004600   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0116 23:54:22.004995   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:22.005467   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:54:22.005496   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:22.005829   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:22.006029   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:22.006185   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:54:22.008077   60073 fix.go:102] recreateIfNeeded on embed-certs-837871: state=Stopped err=<nil>
	I0116 23:54:22.008103   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	W0116 23:54:22.008290   60073 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:22.010638   60073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-837871" ...
	I0116 23:54:20.788433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788853   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has current primary IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788879   59938 main.go:141] libmachine: (no-preload-085322) Found IP for machine: 192.168.50.183
	I0116 23:54:20.788893   59938 main.go:141] libmachine: (no-preload-085322) Reserving static IP address...
	I0116 23:54:20.789229   59938 main.go:141] libmachine: (no-preload-085322) Reserved static IP address: 192.168.50.183
	I0116 23:54:20.789275   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.789290   59938 main.go:141] libmachine: (no-preload-085322) Waiting for SSH to be available...
	I0116 23:54:20.789318   59938 main.go:141] libmachine: (no-preload-085322) DBG | skip adding static IP to network mk-no-preload-085322 - found existing host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"}
	I0116 23:54:20.789337   59938 main.go:141] libmachine: (no-preload-085322) DBG | Getting to WaitForSSH function...
	I0116 23:54:20.791667   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792013   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.792054   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792155   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH client type: external
	I0116 23:54:20.792182   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa (-rw-------)
	I0116 23:54:20.792239   59938 main.go:141] libmachine: (no-preload-085322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:20.792264   59938 main.go:141] libmachine: (no-preload-085322) DBG | About to run SSH command:
	I0116 23:54:20.792282   59938 main.go:141] libmachine: (no-preload-085322) DBG | exit 0
	I0116 23:54:20.878320   59938 main.go:141] libmachine: (no-preload-085322) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:20.878650   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetConfigRaw
	I0116 23:54:20.879331   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:20.881964   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882374   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.882410   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882680   59938 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:54:20.882904   59938 machine.go:88] provisioning docker machine ...
	I0116 23:54:20.882923   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:20.883142   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883335   59938 buildroot.go:166] provisioning hostname "no-preload-085322"
	I0116 23:54:20.883356   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883553   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:20.885549   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.885943   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.885978   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.886040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:20.886216   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886593   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:20.886774   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:20.887119   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:20.887134   59938 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname
	I0116 23:54:21.013385   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-085322
	
	I0116 23:54:21.013408   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.016312   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016630   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.016670   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016859   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.017058   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017252   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017386   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.017557   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.017929   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.017956   59938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-085322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-085322/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-085322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:21.135238   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:21.135270   59938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:21.135289   59938 buildroot.go:174] setting up certificates
	I0116 23:54:21.135313   59938 provision.go:83] configureAuth start
	I0116 23:54:21.135326   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:21.135618   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.138168   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138443   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.138470   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138654   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.140789   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141120   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.141147   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141324   59938 provision.go:138] copyHostCerts
	I0116 23:54:21.141367   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:21.141377   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:21.141447   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:21.141550   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:21.141561   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:21.141599   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:21.141671   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:21.141682   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:21.141714   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:21.141791   59938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.no-preload-085322 san=[192.168.50.183 192.168.50.183 localhost 127.0.0.1 minikube no-preload-085322]
	I0116 23:54:21.265735   59938 provision.go:172] copyRemoteCerts
	I0116 23:54:21.265800   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:21.265825   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.268291   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268647   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.268676   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268842   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.269076   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.269250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.269383   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.351116   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:21.373208   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 23:54:21.395440   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:54:21.418028   59938 provision.go:86] duration metric: configureAuth took 282.698913ms
	I0116 23:54:21.418069   59938 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:21.418298   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:54:21.418409   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.421433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421751   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.421792   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421959   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.422191   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422491   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.422646   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.422977   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.422995   59938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:21.743469   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:21.743502   59938 machine.go:91] provisioned docker machine in 860.58306ms
	I0116 23:54:21.743515   59938 start.go:300] post-start starting for "no-preload-085322" (driver="kvm2")
	I0116 23:54:21.743538   59938 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:21.743558   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.743870   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:21.743898   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.746430   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746786   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.746823   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746957   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.747146   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.747302   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.747394   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.837160   59938 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:21.841116   59938 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:21.841157   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:21.841249   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:21.841329   59938 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:21.841413   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:21.849407   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:21.872039   59938 start.go:303] post-start completed in 128.504699ms
	I0116 23:54:21.872072   59938 fix.go:56] fixHost completed within 18.75725342s
	I0116 23:54:21.872110   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.874707   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875214   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.875240   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875487   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.875722   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.875867   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.876032   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.876210   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.876556   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.876570   59938 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:21.986781   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449261.939803143
	
	I0116 23:54:21.986801   59938 fix.go:206] guest clock: 1705449261.939803143
	I0116 23:54:21.986809   59938 fix.go:219] Guest: 2024-01-16 23:54:21.939803143 +0000 UTC Remote: 2024-01-16 23:54:21.872075872 +0000 UTC m=+263.353199909 (delta=67.727271ms)
	I0116 23:54:21.986830   59938 fix.go:190] guest clock delta is within tolerance: 67.727271ms
	I0116 23:54:21.986836   59938 start.go:83] releasing machines lock for "no-preload-085322", held for 18.872049435s
	I0116 23:54:21.986866   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.987132   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.990038   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990450   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.990479   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990658   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991145   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991340   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991433   59938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:21.991476   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.991598   59938 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:21.991622   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.994160   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994384   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994588   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994611   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994696   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.994864   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.994879   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994956   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.995040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.995116   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995212   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.995279   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.995338   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995469   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:22.075709   59938 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:22.113571   59938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:22.255250   59938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:22.261120   59938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:22.261199   59938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:22.275644   59938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:22.275667   59938 start.go:475] detecting cgroup driver to use...
	I0116 23:54:22.275740   59938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:22.292314   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:22.303940   59938 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:22.303994   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:22.316146   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:22.328261   59938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:22.429568   59938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:22.545391   59938 docker.go:233] disabling docker service ...
	I0116 23:54:22.545478   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:22.558823   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:22.571068   59938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:22.680713   59938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:22.784418   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:22.800751   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:22.819671   59938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:22.819738   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.831950   59938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:22.832019   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.842937   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.853168   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.863057   59938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:22.873184   59938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:22.881975   59938 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:22.882051   59938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:22.895888   59938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:22.904754   59938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:23.007196   59938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:23.167523   59938 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:23.167604   59938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:23.172603   59938 start.go:543] Will wait 60s for crictl version
	I0116 23:54:23.172661   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.176234   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:23.211267   59938 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:23.211355   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.255175   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.300404   59938 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 23:54:23.302242   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:23.305445   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.305835   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:23.305860   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.306058   59938 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:23.310150   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:23.321291   59938 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 23:54:23.321348   59938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:23.358829   59938 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 23:54:23.358866   59938 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:54:23.358910   59938 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:23.358974   59938 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.359014   59938 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.359037   59938 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.359019   59938 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 23:54:23.359109   59938 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.359116   59938 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.359192   59938 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360471   59938 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.360486   59938 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.360479   59938 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 23:54:23.360482   59938 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.360503   59938 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:22.012196   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Start
	I0116 23:54:22.012405   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring networks are active...
	I0116 23:54:22.013178   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network default is active
	I0116 23:54:22.013529   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network mk-embed-certs-837871 is active
	I0116 23:54:22.013912   60073 main.go:141] libmachine: (embed-certs-837871) Getting domain xml...
	I0116 23:54:22.014514   60073 main.go:141] libmachine: (embed-certs-837871) Creating domain...
	I0116 23:54:23.261878   60073 main.go:141] libmachine: (embed-certs-837871) Waiting to get IP...
	I0116 23:54:23.263010   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.263550   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.263625   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.263530   60915 retry.go:31] will retry after 307.379701ms: waiting for machine to come up
	I0116 23:54:23.572127   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.572604   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.572640   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.572557   60915 retry.go:31] will retry after 367.767271ms: waiting for machine to come up
	I0116 23:54:23.942420   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.942907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.942937   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.942855   60915 retry.go:31] will retry after 327.227989ms: waiting for machine to come up
	I0116 23:54:23.582933   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.587427   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.591221   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 23:54:23.600943   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.601854   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.620857   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.636430   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.654149   59938 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 23:54:23.654203   59938 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.654256   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.704462   59938 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 23:54:23.704519   59938 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.704571   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851614   59938 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 23:54:23.851646   59938 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 23:54:23.851663   59938 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.851662   59938 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851711   59938 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 23:54:23.851754   59938 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.851767   59938 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 23:54:23.851795   59938 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.851802   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851832   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.851843   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851845   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.868480   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.906566   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.906609   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.906713   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.927452   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.927455   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.927669   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.927767   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.959664   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 23:54:23.959782   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:23.990016   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 23:54:23.990042   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990040   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:23.990089   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990217   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:24.018967   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019064   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 23:54:24.019080   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019089   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019115   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 23:54:24.019135   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019160   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:24.164580   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.888709   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898467269s)
	I0116 23:54:26.888747   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 23:54:26.888768   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888777   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.869591717s)
	I0116 23:54:26.888817   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888824   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 23:54:26.888710   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.869617277s)
	I0116 23:54:26.888879   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 23:54:26.888856   59938 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724243534s)
	I0116 23:54:26.888931   59938 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 23:54:26.888965   59938 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.889006   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:24.271311   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.271747   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.271777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.271695   60915 retry.go:31] will retry after 459.459832ms: waiting for machine to come up
	I0116 23:54:24.732506   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.733007   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.733036   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.732957   60915 retry.go:31] will retry after 584.775753ms: waiting for machine to come up
	I0116 23:54:25.319663   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:25.320171   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:25.320215   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:25.320117   60915 retry.go:31] will retry after 942.568443ms: waiting for machine to come up
	I0116 23:54:26.264735   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:26.265207   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:26.265241   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:26.265152   60915 retry.go:31] will retry after 986.504626ms: waiting for machine to come up
	I0116 23:54:27.253751   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:27.254422   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:27.254451   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:27.254363   60915 retry.go:31] will retry after 1.332096797s: waiting for machine to come up
	I0116 23:54:28.588407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:28.589024   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:28.589057   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:28.588967   60915 retry.go:31] will retry after 1.510766858s: waiting for machine to come up
	I0116 23:54:29.054814   59938 ssh_runner.go:235] Completed: which crictl: (2.165780571s)
	I0116 23:54:29.054899   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:29.054938   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.166081855s)
	I0116 23:54:29.054973   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 23:54:29.055002   59938 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:29.055058   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:32.781289   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.726190592s)
	I0116 23:54:32.781378   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 23:54:32.781384   59938 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.72645917s)
	I0116 23:54:32.781421   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781452   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 23:54:32.781499   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781549   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:32.786061   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 23:54:30.101582   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:30.102035   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:30.102080   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:30.101996   60915 retry.go:31] will retry after 1.681256612s: waiting for machine to come up
	I0116 23:54:31.786133   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:31.786678   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:31.786717   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:31.786625   60915 retry.go:31] will retry after 2.501397759s: waiting for machine to come up
	I0116 23:54:35.155364   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.37383462s)
	I0116 23:54:35.155398   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 23:54:35.155423   59938 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:35.155471   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:37.035841   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880336789s)
	I0116 23:54:37.035878   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 23:54:37.035908   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:37.035957   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:38.382731   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.346744157s)
	I0116 23:54:38.382770   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 23:54:38.382801   59938 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:38.382857   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:34.289289   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:34.289853   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:34.289876   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:34.289788   60915 retry.go:31] will retry after 2.655614857s: waiting for machine to come up
	I0116 23:54:36.947614   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:36.948090   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:36.948110   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:36.948022   60915 retry.go:31] will retry after 3.331974558s: waiting for machine to come up
	I0116 23:54:41.527170   60269 start.go:369] acquired machines lock for "default-k8s-diff-port-967325" in 4m2.660883224s
	I0116 23:54:41.527252   60269 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:41.527265   60269 fix.go:54] fixHost starting: 
	I0116 23:54:41.527698   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:41.527739   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:41.544050   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0116 23:54:41.544467   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:41.544979   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:54:41.545009   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:41.545297   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:41.545474   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:54:41.545619   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:54:41.547250   60269 fix.go:102] recreateIfNeeded on default-k8s-diff-port-967325: state=Stopped err=<nil>
	I0116 23:54:41.547276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	W0116 23:54:41.547440   60269 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:41.550415   60269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-967325" ...
	I0116 23:54:40.284163   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.284689   60073 main.go:141] libmachine: (embed-certs-837871) Found IP for machine: 192.168.39.226
	I0116 23:54:40.284718   60073 main.go:141] libmachine: (embed-certs-837871) Reserving static IP address...
	I0116 23:54:40.284734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has current primary IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.285176   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.285209   60073 main.go:141] libmachine: (embed-certs-837871) DBG | skip adding static IP to network mk-embed-certs-837871 - found existing host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"}
	I0116 23:54:40.285223   60073 main.go:141] libmachine: (embed-certs-837871) Reserved static IP address: 192.168.39.226
	I0116 23:54:40.285240   60073 main.go:141] libmachine: (embed-certs-837871) Waiting for SSH to be available...
	I0116 23:54:40.285254   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Getting to WaitForSSH function...
	I0116 23:54:40.287766   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288257   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.288283   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288417   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH client type: external
	I0116 23:54:40.288441   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa (-rw-------)
	I0116 23:54:40.288466   60073 main.go:141] libmachine: (embed-certs-837871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:40.288473   60073 main.go:141] libmachine: (embed-certs-837871) DBG | About to run SSH command:
	I0116 23:54:40.288481   60073 main.go:141] libmachine: (embed-certs-837871) DBG | exit 0
	I0116 23:54:40.374194   60073 main.go:141] libmachine: (embed-certs-837871) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:40.374646   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetConfigRaw
	I0116 23:54:40.375380   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.378323   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.378843   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.378877   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.379145   60073 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:54:40.379332   60073 machine.go:88] provisioning docker machine ...
	I0116 23:54:40.379351   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:40.379538   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379712   60073 buildroot.go:166] provisioning hostname "embed-certs-837871"
	I0116 23:54:40.379731   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379882   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.382022   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382386   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.382408   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382542   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.382695   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.382833   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.383019   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.383201   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.383686   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.383707   60073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-837871 && echo "embed-certs-837871" | sudo tee /etc/hostname
	I0116 23:54:40.506034   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-837871
	
	I0116 23:54:40.506064   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.508789   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509236   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.509266   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509427   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.509624   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509782   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509909   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.510109   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.510593   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.510620   60073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-837871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-837871/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-837871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:40.626272   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:40.626298   60073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:40.626356   60073 buildroot.go:174] setting up certificates
	I0116 23:54:40.626372   60073 provision.go:83] configureAuth start
	I0116 23:54:40.626383   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.626705   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.629226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629577   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.629605   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629737   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.631784   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632093   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.632114   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632249   60073 provision.go:138] copyHostCerts
	I0116 23:54:40.632306   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:40.632318   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:40.632389   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:40.632489   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:40.632499   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:40.632529   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:40.632607   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:40.632617   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:40.632645   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:40.632705   60073 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.embed-certs-837871 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube embed-certs-837871]
	I0116 23:54:40.842680   60073 provision.go:172] copyRemoteCerts
	I0116 23:54:40.842749   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:40.842778   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.845198   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845585   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.845626   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845798   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.845987   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.846158   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.846313   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:40.931372   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:54:40.955528   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:40.979724   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 23:54:41.000711   60073 provision.go:86] duration metric: configureAuth took 374.325381ms
	I0116 23:54:41.000743   60073 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:41.000988   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:54:41.001078   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.003907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.004256   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004472   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.004703   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.004886   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.005025   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.005172   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.005489   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.005505   60073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:41.294820   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:41.294846   60073 machine.go:91] provisioned docker machine in 915.500911ms
	I0116 23:54:41.294860   60073 start.go:300] post-start starting for "embed-certs-837871" (driver="kvm2")
	I0116 23:54:41.294873   60073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:41.294894   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.295245   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:41.295275   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.298053   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298453   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.298482   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298630   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.298831   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.299028   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.299229   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.383434   60073 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:41.387526   60073 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:41.387550   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:41.387618   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:41.387716   60073 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:41.387832   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:41.395959   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:41.417602   60073 start.go:303] post-start completed in 122.726786ms
	I0116 23:54:41.417634   60073 fix.go:56] fixHost completed within 19.430636017s
	I0116 23:54:41.417657   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.420348   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420665   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.420692   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420853   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.421099   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421245   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421386   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.421532   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.421882   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.421898   60073 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:41.527026   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449281.479666719
	
	I0116 23:54:41.527054   60073 fix.go:206] guest clock: 1705449281.479666719
	I0116 23:54:41.527061   60073 fix.go:219] Guest: 2024-01-16 23:54:41.479666719 +0000 UTC Remote: 2024-01-16 23:54:41.417638777 +0000 UTC m=+272.403645668 (delta=62.027942ms)
	I0116 23:54:41.527080   60073 fix.go:190] guest clock delta is within tolerance: 62.027942ms
	I0116 23:54:41.527085   60073 start.go:83] releasing machines lock for "embed-certs-837871", held for 19.540117712s
	I0116 23:54:41.527105   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.527420   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:41.530393   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.530857   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.530884   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.531031   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531460   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531637   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531720   60073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:41.531774   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.531821   60073 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:41.531854   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.534407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534578   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.534819   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534933   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535031   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.535068   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.535135   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535229   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535308   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535381   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535431   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.535512   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535633   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.653469   60073 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:41.658877   60073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:41.797035   60073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:41.804397   60073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:41.804475   60073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:41.819295   60073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:41.819319   60073 start.go:475] detecting cgroup driver to use...
	I0116 23:54:41.819382   60073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:41.833454   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:41.845089   60073 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:41.845145   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:41.857037   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:41.869156   60073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:41.968252   60073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:42.079885   60073 docker.go:233] disabling docker service ...
	I0116 23:54:42.079949   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:42.091847   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:42.102517   60073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:42.217275   60073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:42.314542   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:42.326438   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:42.342285   60073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:42.342356   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.354962   60073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:42.355039   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.367222   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.379029   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.387819   60073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:42.396923   60073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:42.404505   60073 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:42.404567   60073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:42.415632   60073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:42.423935   60073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:42.520457   60073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:42.676659   60073 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:42.676727   60073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:42.681457   60073 start.go:543] Will wait 60s for crictl version
	I0116 23:54:42.681535   60073 ssh_runner.go:195] Run: which crictl
	I0116 23:54:42.685259   60073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:42.728719   60073 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:42.728807   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.780603   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.830363   60073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:54:39.032115   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 23:54:39.032163   59938 cache_images.go:123] Successfully loaded all cached images
	I0116 23:54:39.032171   59938 cache_images.go:92] LoadImages completed in 15.67329231s
	I0116 23:54:39.032335   59938 ssh_runner.go:195] Run: crio config
	I0116 23:54:39.091256   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:39.091279   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:39.091299   59938 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:39.091318   59938 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-085322 NodeName:no-preload-085322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:39.091470   59938 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-085322"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:39.091558   59938 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-085322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:39.091619   59938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 23:54:39.100748   59938 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:39.100805   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:39.108879   59938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 23:54:39.123478   59938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 23:54:39.138234   59938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 23:54:39.153408   59938 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:39.156806   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:39.168459   59938 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322 for IP: 192.168.50.183
	I0116 23:54:39.168490   59938 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:39.168630   59938 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:39.168669   59938 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:39.168728   59938 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/client.key
	I0116 23:54:39.168800   59938 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key.c63b40e0
	I0116 23:54:39.168839   59938 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key
	I0116 23:54:39.168946   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:39.168971   59938 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:39.168981   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:39.169006   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:39.169029   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:39.169052   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:39.169104   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:39.169755   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:39.191634   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:54:39.213185   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:39.234431   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:54:39.255434   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:39.277092   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:39.299752   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:39.321124   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:39.342706   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:39.363848   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:39.384588   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:39.405641   59938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:39.421517   59938 ssh_runner.go:195] Run: openssl version
	I0116 23:54:39.426839   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:39.435875   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440157   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440217   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.445267   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:39.454308   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:39.463232   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467601   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467660   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.473056   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:39.482143   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:39.491441   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495918   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495984   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.501453   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:39.510832   59938 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:39.515055   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:39.520820   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:39.526190   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:39.531649   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:39.536949   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:39.542406   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:39.547673   59938 kubeadm.go:404] StartCluster: {Name:no-preload-085322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:39.547793   59938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:39.547843   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:39.584159   59938 cri.go:89] found id: ""
	I0116 23:54:39.584236   59938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:39.592749   59938 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:39.592769   59938 kubeadm.go:636] restartCluster start
	I0116 23:54:39.592830   59938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:39.600998   59938 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:39.602031   59938 kubeconfig.go:92] found "no-preload-085322" server: "https://192.168.50.183:8443"
	I0116 23:54:39.604410   59938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:39.612167   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:39.612220   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:39.622740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.112200   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.112274   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.123342   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.612980   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.613059   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.624162   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.112722   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.112787   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.123740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.612248   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.626135   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.112616   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.112723   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.126872   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.612417   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.612503   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.623787   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.112309   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.112383   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.127168   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.551739   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Start
	I0116 23:54:41.551879   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring networks are active...
	I0116 23:54:41.552631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network default is active
	I0116 23:54:41.552977   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network mk-default-k8s-diff-port-967325 is active
	I0116 23:54:41.553395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Getting domain xml...
	I0116 23:54:41.554029   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Creating domain...
	I0116 23:54:42.830696   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting to get IP...
	I0116 23:54:42.831669   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832085   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832186   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:42.832069   61077 retry.go:31] will retry after 250.838508ms: waiting for machine to come up
	I0116 23:54:43.084848   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085478   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085513   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.085378   61077 retry.go:31] will retry after 344.020128ms: waiting for machine to come up
	I0116 23:54:43.430795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431300   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431329   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.431260   61077 retry.go:31] will retry after 397.588837ms: waiting for machine to come up
	I0116 23:54:42.831766   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:42.834360   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:42.834763   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834949   60073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:42.838761   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:42.853154   60073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:54:42.853222   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:42.890184   60073 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:54:42.890265   60073 ssh_runner.go:195] Run: which lz4
	I0116 23:54:42.894168   60073 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:54:42.898036   60073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:54:42.898066   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:54:43.612492   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.612614   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.626278   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.112257   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.112377   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.126612   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.612241   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.626667   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.112214   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.112305   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.127417   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.612957   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.613061   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.626610   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.112219   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.112324   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.126151   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.612419   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.612513   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.623163   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.112516   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.112621   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.123247   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.612620   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.612713   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.623687   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.112357   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.112460   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.126673   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.830893   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831467   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.831405   61077 retry.go:31] will retry after 443.763933ms: waiting for machine to come up
	I0116 23:54:44.277218   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277738   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.277666   61077 retry.go:31] will retry after 534.948362ms: waiting for machine to come up
	I0116 23:54:44.814256   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814634   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.814585   61077 retry.go:31] will retry after 942.746702ms: waiting for machine to come up
	I0116 23:54:45.758822   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759311   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759340   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:45.759238   61077 retry.go:31] will retry after 1.189643515s: waiting for machine to come up
	I0116 23:54:46.951211   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951644   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:46.951576   61077 retry.go:31] will retry after 1.124824496s: waiting for machine to come up
	I0116 23:54:48.077539   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.077964   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.078001   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:48.077909   61077 retry.go:31] will retry after 1.239334518s: waiting for machine to come up
	I0116 23:54:44.553853   60073 crio.go:444] Took 1.659729 seconds to copy over tarball
	I0116 23:54:44.553941   60073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:54:47.428880   60073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87490029s)
	I0116 23:54:47.428913   60073 crio.go:451] Took 2.875036 seconds to extract the tarball
	I0116 23:54:47.428921   60073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:54:47.469606   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:47.521549   60073 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:54:47.521580   60073 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:54:47.521660   60073 ssh_runner.go:195] Run: crio config
	I0116 23:54:47.575254   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:54:47.575276   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:47.575292   60073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:47.575309   60073 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-837871 NodeName:embed-certs-837871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:47.575434   60073 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-837871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:47.575518   60073 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-837871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:47.575569   60073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:54:47.584525   60073 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:47.584604   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:47.592958   60073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 23:54:47.608090   60073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:54:47.623862   60073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 23:54:47.640242   60073 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:47.644031   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:47.658210   60073 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871 for IP: 192.168.39.226
	I0116 23:54:47.658247   60073 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:47.658451   60073 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:47.658543   60073 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:47.658766   60073 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/client.key
	I0116 23:54:47.658866   60073 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key.1754aec7
	I0116 23:54:47.658920   60073 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key
	I0116 23:54:47.659066   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:47.659104   60073 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:47.659123   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:47.659160   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:47.659190   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:47.659223   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:47.659275   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:47.659998   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:47.687031   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:54:47.713026   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:47.738546   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:54:47.764460   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:47.789464   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:47.814847   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:47.839476   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:47.864396   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:47.889208   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:47.914128   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:47.935079   60073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:47.950932   60073 ssh_runner.go:195] Run: openssl version
	I0116 23:54:47.957306   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:47.967238   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972287   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972338   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.977862   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:47.989326   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:47.999739   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004111   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004170   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.009425   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:48.019822   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:48.029871   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034154   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034221   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.039911   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:48.051585   60073 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:48.056576   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:48.062200   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:48.067931   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:48.073393   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:48.079291   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:48.084923   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:48.090458   60073 kubeadm.go:404] StartCluster: {Name:embed-certs-837871 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:48.090572   60073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:48.090637   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:48.132138   60073 cri.go:89] found id: ""
	I0116 23:54:48.132214   60073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:48.141955   60073 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:48.141976   60073 kubeadm.go:636] restartCluster start
	I0116 23:54:48.142032   60073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:48.151297   60073 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.152324   60073 kubeconfig.go:92] found "embed-certs-837871" server: "https://192.168.39.226:8443"
	I0116 23:54:48.154585   60073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:48.163509   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.163570   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.175536   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.664083   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.664180   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.676605   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.613067   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.992894   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.004266   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.112494   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.112595   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.123795   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.612548   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.612642   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.626676   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.626707   59938 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:49.626718   59938 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:49.626732   59938 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:49.626806   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:49.668119   59938 cri.go:89] found id: ""
	I0116 23:54:49.668192   59938 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:49.682918   59938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:49.691744   59938 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:49.691817   59938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700863   59938 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700895   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:49.815616   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.020421   59938 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.204764214s)
	I0116 23:54:51.020454   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.216832   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.332109   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.399376   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:51.399475   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:51.899827   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.400392   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.899528   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.399686   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:49.319244   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319686   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319717   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:49.319624   61077 retry.go:31] will retry after 1.922153535s: waiting for machine to come up
	I0116 23:54:51.243587   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244058   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244098   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:51.244008   61077 retry.go:31] will retry after 2.437065869s: waiting for machine to come up
	I0116 23:54:53.683433   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683851   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683882   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:53.683823   61077 retry.go:31] will retry after 3.130209662s: waiting for machine to come up
	I0116 23:54:49.163895   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.351314   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.362966   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.664243   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.664369   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.683487   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.163655   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.163757   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.180005   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.664531   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.664611   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.680106   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.163758   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.163894   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.179982   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.664626   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.664708   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.676699   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.163544   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.163670   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.180656   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.663792   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.663880   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.678849   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.164052   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.164169   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.178666   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.664220   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.664316   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.678867   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.899990   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.919132   59938 api_server.go:72] duration metric: took 2.51975517s to wait for apiserver process to appear ...
	I0116 23:54:53.919159   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:54:53.919179   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.905143   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.905180   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.905196   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.941657   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.941684   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.941697   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.986154   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.986183   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:57.419788   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.424352   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.424379   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:57.919987   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.926989   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.927013   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:58.420219   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:58.426904   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:54:58.435007   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:54:58.435038   59938 api_server.go:131] duration metric: took 4.515871856s to wait for apiserver health ...
	I0116 23:54:58.435051   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:58.435061   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:58.437150   59938 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:54:58.438936   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:54:58.455657   59938 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:54:58.508821   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:54:58.522305   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:54:58.522361   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:54:58.522372   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:54:58.522386   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:54:58.522403   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:54:58.522414   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:54:58.522428   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:54:58.522440   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:54:58.522449   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:54:58.522459   59938 system_pods.go:74] duration metric: took 13.604825ms to wait for pod list to return data ...
	I0116 23:54:58.522472   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:54:58.525739   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:54:58.525780   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:54:58.525802   59938 node_conditions.go:105] duration metric: took 3.32348ms to run NodePressure ...
	I0116 23:54:58.525836   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
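The healthz polling near the top of the block above (api_server.go repeatedly checking https://192.168.50.183:8443/healthz until it answers 200 "ok") is a plain poll-until-deadline loop. A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual api_server.go, and the URL, timeout, and InsecureSkipVerify shortcut are assumptions made to keep the sketch short (a real client would trust the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 "ok" or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// The apiserver's /healthz returns the literal body "ok" when healthy,
			// or a per-check report (like the one quoted above) when it is not.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.183:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}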
	I0116 23:54:56.815572   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816189   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816215   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:56.816141   61077 retry.go:31] will retry after 4.356544243s: waiting for machine to come up
	I0116 23:54:54.164263   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.164410   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.179137   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:54.663638   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.663755   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.678463   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.163957   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.164041   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.177018   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.663543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.663648   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.674693   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.164347   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.164456   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.175674   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.664319   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.664402   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.675373   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.164471   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.164576   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.176504   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.664144   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.664251   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.676983   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.164543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:58.164621   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:58.176779   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.176811   60073 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:58.176821   60073 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:58.176833   60073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:58.176899   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:58.214453   60073 cri.go:89] found id: ""
	I0116 23:54:58.214526   60073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:58.232076   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:58.240808   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:58.240879   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.249983   60073 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.250013   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.373313   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.857922   59938 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862719   59938 kubeadm.go:787] kubelet initialised
	I0116 23:54:58.862738   59938 kubeadm.go:788] duration metric: took 4.782925ms waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862746   59938 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:54:58.869022   59938 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.874505   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874535   59938 pod_ready.go:81] duration metric: took 5.485562ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.874546   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874554   59938 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.879329   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879355   59938 pod_ready.go:81] duration metric: took 4.787755ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.879363   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879368   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.883928   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883949   59938 pod_ready.go:81] duration metric: took 4.571713ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.883961   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883969   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.912868   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912894   59938 pod_ready.go:81] duration metric: took 28.911722ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.912907   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912915   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.313029   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313069   59938 pod_ready.go:81] duration metric: took 400.142619ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.313082   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313090   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.712991   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713014   59938 pod_ready.go:81] duration metric: took 399.912003ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.713023   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713028   59938 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:00.114190   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114215   59938 pod_ready.go:81] duration metric: took 401.177651ms waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:00.114225   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114231   59938 pod_ready.go:38] duration metric: took 1.251475914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:00.114247   59938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:00.127362   59938 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:00.127388   59938 kubeadm.go:640] restartCluster took 20.534611532s
	I0116 23:55:00.127403   59938 kubeadm.go:406] StartCluster complete in 20.579733794s
	I0116 23:55:00.127422   59938 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.127503   59938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:00.129224   59938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.129463   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:00.130188   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:55:00.129546   59938 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:00.130489   59938 addons.go:69] Setting storage-provisioner=true in profile "no-preload-085322"
	I0116 23:55:00.130520   59938 addons.go:234] Setting addon storage-provisioner=true in "no-preload-085322"
	W0116 23:55:00.130550   59938 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:00.130626   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.131148   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.131179   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.131603   59938 addons.go:69] Setting default-storageclass=true in profile "no-preload-085322"
	I0116 23:55:00.131662   59938 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-085322"
	I0116 23:55:00.132229   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.132282   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.132642   59938 addons.go:69] Setting metrics-server=true in profile "no-preload-085322"
	I0116 23:55:00.132682   59938 addons.go:234] Setting addon metrics-server=true in "no-preload-085322"
	W0116 23:55:00.132691   59938 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:00.132738   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.133280   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.133322   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.137759   59938 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-085322" context rescaled to 1 replicas
	I0116 23:55:00.137827   59938 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:00.139774   59938 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:00.141410   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:00.150892   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0116 23:55:00.151398   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.151952   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.151970   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.152274   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0116 23:55:00.152458   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0116 23:55:00.152489   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.152695   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.152865   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153081   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153356   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153401   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153541   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153583   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153867   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.153942   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.154667   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.154714   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.155326   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.155362   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.156980   59938 addons.go:234] Setting addon default-storageclass=true in "no-preload-085322"
	W0116 23:55:00.157007   59938 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:00.157043   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.157421   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.157529   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.174130   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0116 23:55:00.174627   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.175185   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.175204   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.175566   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.175814   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.175862   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0116 23:55:00.176349   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.176936   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.176948   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.177295   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.177469   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.177631   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.179319   59938 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:00.180744   59938 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.180762   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:00.180777   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.179023   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.182381   59938 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:00.183551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:00.183564   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:00.183585   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.183692   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184112   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.184133   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.184767   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.184932   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.185450   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.186460   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.186779   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.186812   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.187038   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.187221   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.187328   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.187452   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.189369   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0116 23:55:00.189703   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.190080   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.190091   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.190478   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.190890   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.190930   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.205734   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0116 23:55:00.206238   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.206799   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.206818   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.207212   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.207446   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.208811   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.209063   59938 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.209077   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:00.209094   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.211899   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212297   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.212323   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212575   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.212826   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.213095   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.213275   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.307298   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.335551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:00.335575   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:00.372999   59938 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:00.373001   59938 node_ready.go:35] waiting up to 6m0s for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:00.378131   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:00.378152   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:00.380282   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.401018   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:00.401069   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:00.426132   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093491344s)
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020515974s)
	I0116 23:55:01.400920   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400937   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400965   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400993   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400886   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401092   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401295   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401313   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401324   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401334   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401360   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401402   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401416   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401417   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401426   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401436   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401448   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401458   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401468   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401476   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401725   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401757   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401781   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401789   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401797   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401950   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401973   59938 addons.go:470] Verifying addon metrics-server=true in "no-preload-085322"
	I0116 23:55:01.403136   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.403161   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.403172   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.410263   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.410287   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.410536   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.410575   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.410578   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.412923   59938 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
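The repeated "scp memory --> ..." lines above (ssh_runner.go:362) push in-memory addon manifests straight to files on the VM over SSH. A minimal sketch of that idea, assuming an already-connected golang.org/x/crypto/ssh client and a hypothetical writeRemoteFile helper; this is illustrative only, not minikube's actual ssh_runner implementation.

package provision

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams data to dst on the remote host by piping it into
// "sudo tee", one simple way to reproduce an in-memory "scp" without a
// temporary local file.
func writeRemoteFile(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q > /dev/null", dst))
}

Called with the manifest bytes and a destination such as /etc/kubernetes/addons/storage-provisioner.yaml, this has the same effect as the scp lines recorded in the log.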
	I0116 23:55:02.567723   59622 start.go:369] acquired machines lock for "old-k8s-version-771669" in 54.450397128s
	I0116 23:55:02.567772   59622 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:55:02.567779   59622 fix.go:54] fixHost starting: 
	I0116 23:55:02.568183   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:02.568215   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:02.587692   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0116 23:55:02.588096   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:02.588571   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:02.588590   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:02.588934   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:02.589163   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:02.589273   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:02.590929   59622 fix.go:102] recreateIfNeeded on old-k8s-version-771669: state=Stopped err=<nil>
	I0116 23:55:02.591002   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	W0116 23:55:02.591207   59622 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:55:02.593233   59622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-771669" ...
	I0116 23:55:01.414436   59938 addons.go:505] enable addons completed in 1.284891826s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0116 23:55:02.377542   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:01.175656   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Found IP for machine: 192.168.61.144
	I0116 23:55:01.176276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has current primary IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176287   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserving static IP address...
	I0116 23:55:01.176764   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserved static IP address: 192.168.61.144
	I0116 23:55:01.176803   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.176821   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for SSH to be available...
	I0116 23:55:01.176849   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | skip adding static IP to network mk-default-k8s-diff-port-967325 - found existing host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"}
	I0116 23:55:01.176862   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Getting to WaitForSSH function...
	I0116 23:55:01.179585   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180052   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.180086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH client type: external
	I0116 23:55:01.180225   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa (-rw-------)
	I0116 23:55:01.180258   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:01.180280   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | About to run SSH command:
	I0116 23:55:01.180298   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | exit 0
	I0116 23:55:01.287063   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:01.287361   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetConfigRaw
	I0116 23:55:01.288015   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.291188   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291601   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.291651   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291892   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:55:01.292147   60269 machine.go:88] provisioning docker machine ...
	I0116 23:55:01.292171   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:01.292392   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292603   60269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-967325"
	I0116 23:55:01.292631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.295688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.296107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296214   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.296399   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296557   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296732   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.296957   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.297484   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.297508   60269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-967325 && echo "default-k8s-diff-port-967325" | sudo tee /etc/hostname
	I0116 23:55:01.444451   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-967325
	
	I0116 23:55:01.444484   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.447658   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448083   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.448130   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448237   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.448482   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448670   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448836   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.449035   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.449518   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.449549   60269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-967325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-967325/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-967325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:01.592961   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:01.592998   60269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:01.593037   60269 buildroot.go:174] setting up certificates
	I0116 23:55:01.593052   60269 provision.go:83] configureAuth start
	I0116 23:55:01.593066   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.593369   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.596637   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597053   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.597093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.599945   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600294   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.600332   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600435   60269 provision.go:138] copyHostCerts
	I0116 23:55:01.600492   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:01.600500   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:01.600560   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:01.600653   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:01.600657   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:01.600675   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:01.600733   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:01.600736   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:01.600751   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:01.600807   60269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-967325 san=[192.168.61.144 192.168.61.144 localhost 127.0.0.1 minikube default-k8s-diff-port-967325]
	I0116 23:55:01.777575   60269 provision.go:172] copyRemoteCerts
	I0116 23:55:01.777655   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:01.777685   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.780729   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.781117   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781323   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.781493   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.781672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.781817   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:01.875542   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:01.898144   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 23:55:01.923770   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:01.947374   60269 provision.go:86] duration metric: configureAuth took 354.306627ms
	I0116 23:55:01.947400   60269 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:01.947656   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:55:01.947752   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.950688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951006   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.951031   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951309   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.951475   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951846   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.952024   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.952549   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.952575   60269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:02.296465   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:02.296504   60269 machine.go:91] provisioned docker machine in 1.004340116s
	I0116 23:55:02.296517   60269 start.go:300] post-start starting for "default-k8s-diff-port-967325" (driver="kvm2")
	I0116 23:55:02.296533   60269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:02.296559   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.296898   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:02.296931   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.299843   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.300330   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300424   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.300613   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.300813   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.300988   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.392380   60269 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:02.396719   60269 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:02.396746   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:02.396840   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:02.396931   60269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:02.397013   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:02.405217   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:02.428260   60269 start.go:303] post-start completed in 131.726459ms
	I0116 23:55:02.428289   60269 fix.go:56] fixHost completed within 20.901025477s
	I0116 23:55:02.428351   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.431541   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.431904   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.431935   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.432124   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.432327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432679   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.432865   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:02.433181   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:02.433200   60269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:02.567559   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449302.518065106
	
	I0116 23:55:02.567583   60269 fix.go:206] guest clock: 1705449302.518065106
	I0116 23:55:02.567592   60269 fix.go:219] Guest: 2024-01-16 23:55:02.518065106 +0000 UTC Remote: 2024-01-16 23:55:02.428292966 +0000 UTC m=+263.717566224 (delta=89.77214ms)
	I0116 23:55:02.567628   60269 fix.go:190] guest clock delta is within tolerance: 89.77214ms
	I0116 23:55:02.567634   60269 start.go:83] releasing machines lock for "default-k8s-diff-port-967325", held for 21.040406039s
	I0116 23:55:02.567676   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.567951   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:02.571196   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.571612   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.571641   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.572815   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573415   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573626   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573709   60269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:02.573777   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.573935   60269 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:02.573963   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.577057   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577347   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577687   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577741   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577786   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577804   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577976   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578023   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578172   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578358   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578359   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578488   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.578514   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.707601   60269 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:02.715420   60269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:02.871362   60269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:02.878362   60269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:02.878438   60269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:02.898508   60269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:02.898534   60269 start.go:475] detecting cgroup driver to use...
	I0116 23:55:02.898627   60269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:02.915544   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:02.929881   60269 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:02.929948   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:02.946126   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:02.963314   60269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:03.087669   60269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:03.231908   60269 docker.go:233] disabling docker service ...
	I0116 23:55:03.232001   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:03.247745   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:03.263573   60269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:03.394931   60269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:03.533725   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:03.550475   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:03.571922   60269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:55:03.571984   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.584086   60269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:03.584195   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.595191   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.604671   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.614076   60269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:03.623637   60269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:03.632143   60269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:03.632225   60269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:03.645964   60269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:03.657719   60269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:59.164409   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.363424   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.434315   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.505227   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:59.505321   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.006175   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.505693   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.005697   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.505467   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.005808   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.033017   60073 api_server.go:72] duration metric: took 2.527792184s to wait for apiserver process to appear ...
	I0116 23:55:02.033039   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:02.033056   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:03.785123   60269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:03.976744   60269 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:03.976819   60269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:03.981545   60269 start.go:543] Will wait 60s for crictl version
	I0116 23:55:03.981598   60269 ssh_runner.go:195] Run: which crictl
	I0116 23:55:03.985233   60269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:04.033443   60269 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:04.033541   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.087776   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.142302   60269 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:55:02.594568   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Start
	I0116 23:55:02.594750   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring networks are active...
	I0116 23:55:02.595457   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network default is active
	I0116 23:55:02.595812   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network mk-old-k8s-version-771669 is active
	I0116 23:55:02.596285   59622 main.go:141] libmachine: (old-k8s-version-771669) Getting domain xml...
	I0116 23:55:02.597150   59622 main.go:141] libmachine: (old-k8s-version-771669) Creating domain...
	I0116 23:55:03.999986   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting to get IP...
	I0116 23:55:04.001060   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.001581   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.001663   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.001550   61289 retry.go:31] will retry after 298.561748ms: waiting for machine to come up
	I0116 23:55:04.302120   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.302820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.302847   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.302767   61289 retry.go:31] will retry after 342.293835ms: waiting for machine to come up
	I0116 23:55:04.646424   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.647107   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.647133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.647055   61289 retry.go:31] will retry after 395.611503ms: waiting for machine to come up
	I0116 23:55:05.046785   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.047276   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.047304   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.047189   61289 retry.go:31] will retry after 552.22886ms: waiting for machine to come up
	I0116 23:55:07.029353   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.029384   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.029401   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.187789   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.187830   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.187877   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.197889   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.197924   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.533214   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.540976   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:07.541008   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.033550   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.044749   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:08.044779   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.533231   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.540197   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0116 23:55:08.551065   60073 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:08.551108   60073 api_server.go:131] duration metric: took 6.518060223s to wait for apiserver health ...
	I0116 23:55:08.551119   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:55:08.551128   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:08.553370   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:04.377661   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:06.377732   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:07.377978   59938 node_ready.go:49] node "no-preload-085322" has status "Ready":"True"
	I0116 23:55:07.378007   59938 node_ready.go:38] duration metric: took 7.004955625s waiting for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:07.378019   59938 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:07.394319   59938 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401604   59938 pod_ready.go:92] pod "coredns-76f75df574-ptq95" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.401634   59938 pod_ready.go:81] duration metric: took 7.260618ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401647   59938 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412094   59938 pod_ready.go:92] pod "etcd-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.412123   59938 pod_ready.go:81] duration metric: took 10.46753ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412137   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922096   59938 pod_ready.go:92] pod "kube-apiserver-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.922169   59938 pod_ready.go:81] duration metric: took 510.023791ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922208   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929615   59938 pod_ready.go:92] pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.929645   59938 pod_ready.go:81] duration metric: took 7.422332ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929659   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178529   59938 pod_ready.go:92] pod "kube-proxy-64z5c" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.178558   59938 pod_ready.go:81] duration metric: took 248.89013ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178572   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:04.144239   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:04.147395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.147816   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:04.147864   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.148032   60269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:04.152106   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:04.166312   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:55:04.166412   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:04.207955   60269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:55:04.208024   60269 ssh_runner.go:195] Run: which lz4
	I0116 23:55:04.211817   60269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:04.215791   60269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:04.215816   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:55:06.109275   60269 crio.go:444] Took 1.897478 seconds to copy over tarball
	I0116 23:55:06.109361   60269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:08.555066   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:08.584102   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:08.660533   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:08.680559   60073 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:08.680588   60073 system_pods.go:61] "coredns-5dd5756b68-49p2f" [5241a39a-599e-4ae2-b8c8-7494382819d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:08.680595   60073 system_pods.go:61] "etcd-embed-certs-837871" [99fce5e6-124e-4e96-b722-41c0be595863] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:08.680603   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [7bf73dd6-7f27-482a-896a-a5097bd047a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:08.680609   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [be8f34fb-2d00-4c86-aab3-c4d74d92d42c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:08.680615   60073 system_pods.go:61] "kube-proxy-nglts" [3ec00f1a-258b-4da3-9b41-dbd96156de04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:08.680624   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [f9af2c43-cb66-4ebb-b23c-4f898be33d64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:08.680669   60073 system_pods.go:61] "metrics-server-57f55c9bc5-npd7s" [5aa75079-2c85-4fde-ba88-9ae5bb73ecc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:08.680678   60073 system_pods.go:61] "storage-provisioner" [5bae4d8b-030b-4476-8aa6-f4a66a8f80a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:55:08.680685   60073 system_pods.go:74] duration metric: took 20.127241ms to wait for pod list to return data ...
	I0116 23:55:08.680695   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:08.685562   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:08.685594   60073 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:08.685604   60073 node_conditions.go:105] duration metric: took 4.905393ms to run NodePressure ...
	I0116 23:55:08.685622   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:05.600887   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.601408   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.601444   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.601312   61289 retry.go:31] will retry after 584.67072ms: waiting for machine to come up
	I0116 23:55:06.188018   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:06.188524   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:06.188550   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:06.188434   61289 retry.go:31] will retry after 859.064841ms: waiting for machine to come up
	I0116 23:55:07.048810   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:07.049461   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:07.049491   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:07.049417   61289 retry.go:31] will retry after 1.064800753s: waiting for machine to come up
	I0116 23:55:08.115741   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:08.116406   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:08.116430   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:08.116372   61289 retry.go:31] will retry after 1.289118736s: waiting for machine to come up
	I0116 23:55:09.407820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:09.408291   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:09.408319   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:09.408262   61289 retry.go:31] will retry after 1.623353195s: waiting for machine to come up
	I0116 23:55:08.979310   59938 pod_ready.go:92] pod "kube-scheduler-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.979407   59938 pod_ready.go:81] duration metric: took 800.824219ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.979438   59938 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.546193   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:09.452388   60269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342992298s)
	I0116 23:55:09.452415   60269 crio.go:451] Took 3.343109 seconds to extract the tarball
	I0116 23:55:09.452423   60269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:09.497202   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:09.552426   60269 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:55:09.552460   60269 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:55:09.552532   60269 ssh_runner.go:195] Run: crio config
	I0116 23:55:09.623685   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:09.623716   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:09.623743   60269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:09.623767   60269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-967325 NodeName:default-k8s-diff-port-967325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:55:09.623938   60269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-967325"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:09.624024   60269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-967325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 23:55:09.624079   60269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:55:09.632768   60269 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:09.632838   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:09.642978   60269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 23:55:09.660304   60269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:09.677864   60269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 23:55:09.699234   60269 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:09.703170   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:09.718511   60269 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325 for IP: 192.168.61.144
	I0116 23:55:09.718551   60269 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:09.718727   60269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:09.718798   60269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:09.718895   60269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/client.key
	I0116 23:55:09.718975   60269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key.a430fbc2
	I0116 23:55:09.719039   60269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key
	I0116 23:55:09.719175   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:09.719225   60269 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:09.719240   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:09.719283   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:09.719318   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:09.719358   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:09.719416   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:09.720339   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:09.748578   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:55:09.778396   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:09.803745   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:55:09.828009   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:09.850951   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:09.874273   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:09.897385   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:09.923319   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:09.946301   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:09.970778   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:09.994497   60269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:10.013259   60269 ssh_runner.go:195] Run: openssl version
	I0116 23:55:10.020357   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:10.032324   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037071   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037122   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.043220   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:10.052796   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:10.063065   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.067904   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.068000   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.074570   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:10.087080   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:10.099734   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105299   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105360   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.112084   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:10.123175   60269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:10.127669   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:10.133522   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:10.139085   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:10.145018   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:10.150920   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:10.156719   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:55:10.162808   60269 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:10.162893   60269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:10.162936   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:10.208917   60269 cri.go:89] found id: ""
	I0116 23:55:10.209008   60269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:10.221689   60269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:10.221710   60269 kubeadm.go:636] restartCluster start
	I0116 23:55:10.221776   60269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:10.233762   60269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.234916   60269 kubeconfig.go:92] found "default-k8s-diff-port-967325" server: "https://192.168.61.144:8444"
	I0116 23:55:10.237484   60269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:10.246418   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.246495   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.257759   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.747378   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.747466   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.761884   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.247445   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.247543   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.258490   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.747483   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.747623   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.764389   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.246997   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.247122   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.262538   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.747219   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.747387   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.762535   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.246636   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.246705   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.258883   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.747504   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.747588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.759640   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:09.229704   60073 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224745   60073 kubeadm.go:787] kubelet initialised
	I0116 23:55:10.224771   60073 kubeadm.go:788] duration metric: took 994.984702ms waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224781   60073 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:11.348058   60073 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.356516   60073 pod_ready.go:102] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:13.856540   60073 pod_ready.go:92] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:13.856573   60073 pod_ready.go:81] duration metric: took 2.508479475s waiting for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.856586   60073 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.033009   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:11.033544   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:11.033588   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:11.033487   61289 retry.go:31] will retry after 1.553841353s: waiting for machine to come up
	I0116 23:55:12.588794   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:12.589269   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:12.589297   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:12.589245   61289 retry.go:31] will retry after 1.907517113s: waiting for machine to come up
	I0116 23:55:14.499305   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:14.499734   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:14.499759   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:14.499683   61289 retry.go:31] will retry after 3.406811143s: waiting for machine to come up
	I0116 23:55:13.986208   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:15.987948   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:18.490012   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:14.247197   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.247299   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.262013   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:14.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.746558   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.761452   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.246988   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.247075   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.261345   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.747524   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.747618   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.760291   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.246551   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.246648   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.260545   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.746471   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.746585   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.758637   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.247227   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.247331   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.258514   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.747046   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.747138   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.758877   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.247489   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.247561   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.259581   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.747241   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.747335   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.759146   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.867702   60073 pod_ready.go:102] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:17.864681   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.864706   60073 pod_ready.go:81] duration metric: took 4.008111977s waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.864718   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873106   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.873127   60073 pod_ready.go:81] duration metric: took 8.400576ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873136   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878501   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.878519   60073 pod_ready.go:81] duration metric: took 5.375395ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878535   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883653   60073 pod_ready.go:92] pod "kube-proxy-nglts" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.883669   60073 pod_ready.go:81] duration metric: took 5.128525ms waiting for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883680   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.888978   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.888996   60073 pod_ready.go:81] duration metric: took 5.309484ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.889011   60073 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.908092   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:17.908486   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:17.908520   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:17.908432   61289 retry.go:31] will retry after 3.983135021s: waiting for machine to come up
	I0116 23:55:20.987833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:22.989682   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:19.246437   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.246547   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.257900   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:19.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.746572   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.758509   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.247334   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:20.247418   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:20.258909   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.258939   60269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:20.258948   60269 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:20.258958   60269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:20.259023   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:20.300659   60269 cri.go:89] found id: ""
	I0116 23:55:20.300740   60269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:20.315326   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:20.323563   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:20.323629   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331846   60269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331871   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:20.443085   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.556705   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113585461s)
	I0116 23:55:21.556730   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.745024   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.824910   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.916770   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:21.916856   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.416983   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.917411   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:23.417012   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:19.896636   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.898504   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.896143   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896665   59622 main.go:141] libmachine: (old-k8s-version-771669) Found IP for machine: 192.168.72.114
	I0116 23:55:21.896717   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has current primary IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896729   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserving static IP address...
	I0116 23:55:21.897128   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.897157   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | skip adding static IP to network mk-old-k8s-version-771669 - found existing host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"}
	I0116 23:55:21.897174   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Getting to WaitForSSH function...
	I0116 23:55:21.897194   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserved static IP address: 192.168.72.114
	I0116 23:55:21.897207   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting for SSH to be available...
	I0116 23:55:21.900064   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900492   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.900531   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900775   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH client type: external
	I0116 23:55:21.900805   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa (-rw-------)
	I0116 23:55:21.900835   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:21.900852   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | About to run SSH command:
	I0116 23:55:21.900867   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | exit 0
	I0116 23:55:22.002573   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:22.003051   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetConfigRaw
	I0116 23:55:22.003790   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.007208   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.007726   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007947   59622 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:55:22.008199   59622 machine.go:88] provisioning docker machine ...
	I0116 23:55:22.008225   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.008439   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008649   59622 buildroot.go:166] provisioning hostname "old-k8s-version-771669"
	I0116 23:55:22.008672   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008859   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.011893   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012288   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.012321   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012475   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.012655   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.012825   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.013009   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.013176   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.013645   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.013669   59622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-771669 && echo "old-k8s-version-771669" | sudo tee /etc/hostname
	I0116 23:55:22.159863   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-771669
	
	I0116 23:55:22.159897   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.162806   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163257   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.163296   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163483   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.163700   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.163882   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.164023   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.164179   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.164551   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.164569   59622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-771669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-771669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-771669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:22.309881   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:22.309914   59622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:22.309935   59622 buildroot.go:174] setting up certificates
	I0116 23:55:22.309945   59622 provision.go:83] configureAuth start
	I0116 23:55:22.309957   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.310198   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.312567   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.312901   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.312930   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.313107   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.315382   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.315767   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.315807   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.316000   59622 provision.go:138] copyHostCerts
	I0116 23:55:22.316043   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:22.316053   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:22.316116   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:22.316202   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:22.316210   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:22.316228   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:22.316289   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:22.316296   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:22.316312   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:22.316365   59622 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-771669 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube old-k8s-version-771669]
	I0116 23:55:22.437253   59622 provision.go:172] copyRemoteCerts
	I0116 23:55:22.437325   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:22.437348   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.440075   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440363   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.440390   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440626   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.440808   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.440960   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.441145   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:22.536222   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:22.562061   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 23:55:22.586856   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:22.610936   59622 provision.go:86] duration metric: configureAuth took 300.975023ms
	I0116 23:55:22.610965   59622 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:22.611217   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:55:22.611306   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.614770   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615218   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.615253   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615508   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.615738   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.615931   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.616078   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.616259   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.616622   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.616641   59622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:22.958075   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:22.958102   59622 machine.go:91] provisioned docker machine in 949.885683ms
	I0116 23:55:22.958121   59622 start.go:300] post-start starting for "old-k8s-version-771669" (driver="kvm2")
	I0116 23:55:22.958136   59622 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:22.958160   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.958492   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:22.958528   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.961489   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.961850   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.961879   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.962042   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.962232   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.962423   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.962585   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.058948   59622 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:23.063281   59622 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:23.063309   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:23.063383   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:23.063477   59622 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:23.063589   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:23.075280   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:23.099934   59622 start.go:303] post-start completed in 141.796411ms
	I0116 23:55:23.099963   59622 fix.go:56] fixHost completed within 20.532183026s
	I0116 23:55:23.099986   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.102938   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103320   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.103355   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103471   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.103682   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103837   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103981   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.104148   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:23.104525   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:23.104539   59622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:23.239875   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449323.216935077
	
	I0116 23:55:23.239947   59622 fix.go:206] guest clock: 1705449323.216935077
	I0116 23:55:23.239963   59622 fix.go:219] Guest: 2024-01-16 23:55:23.216935077 +0000 UTC Remote: 2024-01-16 23:55:23.099966517 +0000 UTC m=+357.574360679 (delta=116.96856ms)
	I0116 23:55:23.239987   59622 fix.go:190] guest clock delta is within tolerance: 116.96856ms
	I0116 23:55:23.239994   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 20.672247822s
	I0116 23:55:23.240021   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.240303   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:23.243487   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.243962   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.243999   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.244245   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244731   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244917   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.245023   59622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:23.245091   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.245237   59622 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:23.245261   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.248169   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248391   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248664   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.248691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248835   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.248936   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.249012   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.249043   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249196   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249284   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.249351   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.249454   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249607   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249737   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.380837   59622 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:23.387163   59622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:23.543350   59622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:23.550519   59622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:23.550587   59622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:23.565019   59622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:23.565046   59622 start.go:475] detecting cgroup driver to use...
	I0116 23:55:23.565125   59622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:23.579314   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:23.591247   59622 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:23.591310   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:23.605294   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:23.618799   59622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:23.742752   59622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:23.876604   59622 docker.go:233] disabling docker service ...
	I0116 23:55:23.876678   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:23.891240   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:23.906010   59622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:24.059751   59622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:24.186517   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:24.201344   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:24.218947   59622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 23:55:24.219014   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.230843   59622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:24.230917   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.243120   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.252562   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.264610   59622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:24.275702   59622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:24.284982   59622 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:24.285046   59622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:24.298681   59622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:24.307743   59622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:55:24.425125   59622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:24.597300   59622 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:24.597373   59622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:24.603241   59622 start.go:543] Will wait 60s for crictl version
	I0116 23:55:24.603314   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:24.607580   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:24.648923   59622 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:24.649022   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.696485   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.754660   59622 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 23:55:24.756045   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:24.759033   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759392   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:24.759432   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759771   59622 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:24.764448   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:24.777724   59622 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 23:55:24.777812   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:24.825020   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:24.825088   59622 ssh_runner.go:195] Run: which lz4
	I0116 23:55:24.829208   59622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:24.833495   59622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:24.833523   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 23:55:24.992848   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:27.488098   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:23.916961   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.417588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.441144   60269 api_server.go:72] duration metric: took 2.5243712s to wait for apiserver process to appear ...
	I0116 23:55:24.441176   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:24.441198   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:24.441742   60269 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0116 23:55:24.941292   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.835831   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.835867   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.835882   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.868017   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.868058   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.942282   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.960876   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:27.960928   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:28.442258   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.449969   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.450001   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:24.397456   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:26.397862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.404313   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.941892   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.959617   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.959651   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:29.441742   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:29.446933   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0116 23:55:29.455520   60269 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:29.455548   60269 api_server.go:131] duration metric: took 5.014364838s to wait for apiserver health ...
	I0116 23:55:29.455561   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:29.455569   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:29.457775   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
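
For reference, the api_server.go lines above record minikube polling the apiserver /healthz endpoint roughly every 500ms until it answers 200; the intermediate 403 and 500 responses are expected while RBAC bootstrap roles and the other post-start hooks finish. A minimal Go sketch of that polling pattern, not part of the test output, assuming the same https://192.168.61.144:8444/healthz endpoint and a self-signed serving certificate:

    // healthz_probe.go — illustrative sketch only; URL and timeout are assumptions.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serves a self-signed certificate here, so skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered "ok"
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.144:8444/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
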
	I0116 23:55:26.372140   59622 crio.go:444] Took 1.542968 seconds to copy over tarball
	I0116 23:55:26.372233   59622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:29.316720   59622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944443375s)
	I0116 23:55:29.316749   59622 crio.go:451] Took 2.944578 seconds to extract the tarball
	I0116 23:55:29.316760   59622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:29.359053   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:29.407438   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:29.407466   59622 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:55:29.407526   59622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.407582   59622 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.407605   59622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.407624   59622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.407656   59622 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 23:55:29.407657   59622 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.407840   59622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.407530   59622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.409393   59622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 23:55:29.409457   59622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.409480   59622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.409647   59622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.409675   59622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.409682   59622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.622629   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.626907   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.630596   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 23:55:29.633693   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.635868   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.644919   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.649358   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.724339   59622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 23:55:29.724400   59622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.724467   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.795647   59622 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 23:55:29.795694   59622 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.795747   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.844312   59622 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 23:55:29.844373   59622 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 23:55:29.844427   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849856   59622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 23:55:29.849876   59622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.849911   59622 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 23:55:29.849928   59622 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.849956   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850005   59622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 23:55:29.850030   59622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.850047   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.850062   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850101   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.852839   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 23:55:29.872722   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.872753   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.872821   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.872997   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.963139   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 23:55:29.967047   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 23:55:29.981726   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 23:55:30.047814   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 23:55:30.047906   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 23:55:30.047972   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 23:55:30.048002   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 23:55:30.281680   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:30.423881   59622 cache_images.go:92] LoadImages completed in 1.016396141s
	W0116 23:55:30.423996   59622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0116 23:55:30.424113   59622 ssh_runner.go:195] Run: crio config
	I0116 23:55:30.486915   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:30.486935   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:30.486951   59622 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:30.486975   59622 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-771669 NodeName:old-k8s-version-771669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 23:55:30.487151   59622 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-771669"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-771669
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.114:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:30.487252   59622 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-771669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:55:30.487320   59622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 23:55:30.497629   59622 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:30.497706   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:30.505710   59622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 23:55:30.523292   59622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:30.539544   59622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 23:55:30.557436   59622 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:30.561329   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:29.488446   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:32.775251   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:29.459468   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:29.471218   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:29.488687   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:29.499433   60269 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:29.499458   60269 system_pods.go:61] "coredns-5dd5756b68-7kwrd" [38a96fe5-70a8-46e6-b899-b39558e08855] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:29.499465   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [bc2e7805-71f2-4924-80d7-2dd853ebeea9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:29.499472   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [8c01f8da-0156-4d16-b5e7-262427171137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:29.499484   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [04b93c96-ebc0-4257-b480-7be1ea9f7fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:29.499496   60269 system_pods.go:61] "kube-proxy-jmq58" [ec5c282f-04c8-4839-a16f-0a2024e0d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:29.499521   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [11e73d49-a3ba-44b3-9630-fd07fb23777f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:29.499533   60269 system_pods.go:61] "metrics-server-57f55c9bc5-bkbpm" [6ddb8af1-da20-4400-b6ba-6f0cf342b115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:29.499538   60269 system_pods.go:61] "storage-provisioner" [5b22598c-c5e0-4a9e-96f3-1732ecd018a1] Running
	I0116 23:55:29.499544   60269 system_pods.go:74] duration metric: took 10.840963ms to wait for pod list to return data ...
	I0116 23:55:29.499550   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:29.502918   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:29.502954   60269 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:29.502965   60269 node_conditions.go:105] duration metric: took 3.409475ms to run NodePressure ...
	I0116 23:55:29.502985   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:29.743687   60269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749616   60269 kubeadm.go:787] kubelet initialised
	I0116 23:55:29.749676   60269 kubeadm.go:788] duration metric: took 5.958924ms waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749687   60269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:29.756788   60269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.762593   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762669   60269 pod_ready.go:81] duration metric: took 5.856721ms waiting for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.762686   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762695   60269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.768772   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768801   60269 pod_ready.go:81] duration metric: took 6.092773ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.768816   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768824   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.775409   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775442   60269 pod_ready.go:81] duration metric: took 6.605139ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.775455   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775463   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.902106   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902206   60269 pod_ready.go:81] duration metric: took 126.731712ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.902236   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902269   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829869   60269 pod_ready.go:92] pod "kube-proxy-jmq58" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:30.829891   60269 pod_ready.go:81] duration metric: took 927.598475ms waiting for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829900   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:32.831782   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.899557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:33.397105   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.574029   59622 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669 for IP: 192.168.72.114
	I0116 23:55:30.890778   59622 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:30.890952   59622 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:30.891020   59622 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:30.891123   59622 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/client.key
	I0116 23:55:31.309085   59622 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key.9adeb8c5
	I0116 23:55:31.309205   59622 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key
	I0116 23:55:31.309360   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:31.309405   59622 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:31.309417   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:31.309461   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:31.309514   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:31.309547   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:31.309606   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:31.310493   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:31.335886   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:55:31.358617   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:31.382183   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:55:31.407509   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:31.429683   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:31.453368   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:31.476083   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:31.499326   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:31.522939   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:31.548912   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:31.571716   59622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:31.587851   59622 ssh_runner.go:195] Run: openssl version
	I0116 23:55:31.593185   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:31.602521   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.606986   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.607049   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.612447   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:31.622043   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:31.631959   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636586   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636653   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.642415   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:31.651566   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:31.660990   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665574   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665624   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.671129   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:31.680951   59622 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:31.685144   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:31.690488   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:31.696140   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:31.702013   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:31.707887   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:31.713601   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
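
The openssl x509 -checkend 86400 calls above verify that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. A minimal Go sketch of the same check, not part of the test output, with an illustrative certificate path:

    // certcheck.go — illustrative sketch only; the path is an assumption.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid d from now,
    // mirroring what "openssl x509 -noout -checkend 86400" checks for 24 hours.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
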
	I0116 23:55:31.719957   59622 kubeadm.go:404] StartCluster: {Name:old-k8s-version-771669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:31.720050   59622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:31.720106   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:31.764090   59622 cri.go:89] found id: ""
	I0116 23:55:31.764179   59622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:31.772783   59622 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:31.772800   59622 kubeadm.go:636] restartCluster start
	I0116 23:55:31.772900   59622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:31.782951   59622 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:31.784108   59622 kubeconfig.go:92] found "old-k8s-version-771669" server: "https://192.168.72.114:8443"
	I0116 23:55:31.786822   59622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:31.795516   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:31.795564   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:31.806541   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.296087   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.296205   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.308136   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.796155   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.796250   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.812275   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.295834   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.295918   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.309867   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.796504   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.796592   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.808880   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.296500   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.296567   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.308101   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.795674   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.795765   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.808334   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:35.295900   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.295998   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.308522   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.987445   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:37.488388   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:34.836821   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:36.837242   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.896319   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.396168   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.796048   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.796157   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.809841   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.296449   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.296573   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.309339   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.795874   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.795953   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.810740   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.296322   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.296421   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.308384   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.796469   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.796576   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.810173   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.295663   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.295750   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.307391   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.795952   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.796050   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.809147   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.295669   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.295754   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.308210   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.796104   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.796226   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.808134   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:40.295713   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.295815   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.307552   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.986946   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.487118   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.838230   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:39.837451   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:39.837475   60269 pod_ready.go:81] duration metric: took 9.007568234s waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:39.837495   60269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:41.844595   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.397089   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.896014   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.795619   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.795698   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.809529   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.296081   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.296153   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.309642   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.796355   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.796439   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.808383   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.808409   59622 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:41.808417   59622 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:41.808426   59622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:41.808480   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:41.851612   59622 cri.go:89] found id: ""
	I0116 23:55:41.851668   59622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:41.867103   59622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:41.876244   59622 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:41.876306   59622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886007   59622 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886029   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.004968   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.972680   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.175241   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.242840   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.330848   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:43.330935   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:43.831021   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.331539   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.831545   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.331601   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.354248   59622 api_server.go:72] duration metric: took 2.023403352s to wait for apiserver process to appear ...
	I0116 23:55:45.354271   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:45.354287   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:45.354802   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": dial tcp 192.168.72.114:8443: connect: connection refused
	I0116 23:55:44.988114   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.486765   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:43.846368   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.848129   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:48.344150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:44.897147   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.396873   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.855032   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:50.855392   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 23:55:50.855430   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.372327   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.372361   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.372383   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.429072   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.429102   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.854848   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.861367   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:51.861393   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.354990   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.360925   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:52.360951   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.854778   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.861036   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:55:52.868982   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:55:52.869013   59622 api_server.go:131] duration metric: took 7.514729701s to wait for apiserver health ...
	I0116 23:55:52.869024   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:52.869033   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:52.870842   59622 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:49.486891   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.489411   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:50.345462   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.345784   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:49.397270   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.397489   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:53.398253   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.872155   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:52.883251   59622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:52.904708   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:52.916515   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:55:52.916550   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:55:52.916558   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:55:52.916564   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:55:52.916571   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Pending
	I0116 23:55:52.916577   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:55:52.916584   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:55:52.916597   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:55:52.916606   59622 system_pods.go:74] duration metric: took 11.876364ms to wait for pod list to return data ...
	I0116 23:55:52.916618   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:52.920125   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:52.920158   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:52.920178   59622 node_conditions.go:105] duration metric: took 3.551281ms to run NodePressure ...
	I0116 23:55:52.920199   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:53.157112   59622 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161560   59622 kubeadm.go:787] kubelet initialised
	I0116 23:55:53.161590   59622 kubeadm.go:788] duration metric: took 4.45031ms waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161601   59622 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:53.167210   59622 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.172679   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172705   59622 pod_ready.go:81] duration metric: took 5.453621ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.172713   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172722   59622 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.178090   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178121   59622 pod_ready.go:81] duration metric: took 5.38864ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.178132   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178141   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.183932   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183963   59622 pod_ready.go:81] duration metric: took 5.809315ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.183973   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183979   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.309476   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309502   59622 pod_ready.go:81] duration metric: took 125.513469ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.309518   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309526   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.710400   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710426   59622 pod_ready.go:81] duration metric: took 400.892114ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.710435   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710441   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:54.108608   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108638   59622 pod_ready.go:81] duration metric: took 398.187187ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:54.108652   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108661   59622 pod_ready.go:38] duration metric: took 947.048567ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:54.108682   59622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:54.128862   59622 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:54.128889   59622 kubeadm.go:640] restartCluster took 22.356081524s
	I0116 23:55:54.128900   59622 kubeadm.go:406] StartCluster complete in 22.408946885s
	I0116 23:55:54.128919   59622 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.129004   59622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:54.131909   59622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.132201   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:54.132350   59622 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:54.132423   59622 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-771669"
	I0116 23:55:54.132445   59622 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-771669"
	I0116 23:55:54.132446   59622 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-771669"
	W0116 23:55:54.132457   59622 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:54.132467   59622 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:54.132468   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0116 23:55:54.132479   59622 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:54.132520   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132551   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132889   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.132943   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133041   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133083   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133245   59622 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-771669"
	I0116 23:55:54.133294   59622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-771669"
	I0116 23:55:54.133724   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133789   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.148645   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0116 23:55:54.148879   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 23:55:54.149227   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149356   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149715   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149739   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.149900   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149917   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.150032   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150210   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.150281   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150883   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.150932   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.154047   59622 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-771669"
	W0116 23:55:54.154070   59622 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:54.154099   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.154457   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.154502   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.156296   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0116 23:55:54.156719   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.157170   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.157199   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.157673   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.158266   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.158321   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.168301   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0116 23:55:54.168898   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.169505   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.169524   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.169888   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.170106   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.171966   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.174198   59622 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:54.173406   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0116 23:55:54.179587   59622 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.179605   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:54.179625   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.174560   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0116 23:55:54.180004   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180109   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180627   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180653   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180768   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180790   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180993   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181177   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181353   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.181578   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.181627   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.183580   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.185359   59622 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:54.184028   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.184548   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.186663   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:54.186672   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.186679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:54.186699   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.186698   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.186864   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.186964   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.187041   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.189698   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190070   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.190133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190266   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.190461   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.190582   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.190678   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.215481   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0116 23:55:54.215974   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.216416   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.216435   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.216816   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.217016   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.219327   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.219556   59622 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.219571   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:54.219588   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.222719   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223367   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.223154   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.223442   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223564   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.223712   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.223850   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.356173   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:54.356192   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:54.371191   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.410651   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:54.410679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:54.413826   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.524186   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.524211   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:54.553600   59622 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:54.610636   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.692080   59622 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-771669" context rescaled to 1 replicas
	I0116 23:55:54.692117   59622 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:54.694001   59622 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:54.695339   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:55.104119   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104142   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104162   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104148   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104471   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104493   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.104504   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104514   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104558   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104729   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104745   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104748   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105133   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105152   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105185   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.105199   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.105402   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.105496   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105518   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.113836   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.113861   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.114230   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.114254   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.114275   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.125955   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.125983   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.125955   59622 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:55:55.126228   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126243   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126267   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.126278   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.126579   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126599   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126609   59622 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:55.126587   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.128592   59622 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 23:55:55.129717   59622 addons.go:505] enable addons completed in 997.38021ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 23:55:53.987019   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.987081   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.485357   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:54.345875   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:56.347375   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.898737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.905488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.130634   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:59.630394   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:56:00.487739   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.985925   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.845233   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:00.845467   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:03.344488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.130130   59622 node_ready.go:49] node "old-k8s-version-771669" has status "Ready":"True"
	I0116 23:56:02.130152   59622 node_ready.go:38] duration metric: took 7.004088356s waiting for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:56:02.130160   59622 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.135239   59622 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140322   59622 pod_ready.go:92] pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.140347   59622 pod_ready.go:81] duration metric: took 5.084772ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140358   59622 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144917   59622 pod_ready.go:92] pod "etcd-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.144938   59622 pod_ready.go:81] duration metric: took 4.572247ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144946   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149588   59622 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.149606   59622 pod_ready.go:81] duration metric: took 4.65461ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149614   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153874   59622 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.153891   59622 pod_ready.go:81] duration metric: took 4.272031ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153899   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531721   59622 pod_ready.go:92] pod "kube-proxy-9ghls" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.531742   59622 pod_ready.go:81] duration metric: took 377.837979ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531751   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930934   59622 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.930957   59622 pod_ready.go:81] duration metric: took 399.199037ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930966   59622 pod_ready.go:38] duration metric: took 800.791409ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.930982   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:56:02.931031   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:56:02.945606   59622 api_server.go:72] duration metric: took 8.253459173s to wait for apiserver process to appear ...
	I0116 23:56:02.945631   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:56:02.945649   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:56:02.952493   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:56:02.953510   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:56:02.953536   59622 api_server.go:131] duration metric: took 7.895148ms to wait for apiserver health ...
	I0116 23:56:02.953545   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:56:03.133648   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:56:03.133673   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.133679   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.133683   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.133688   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.133691   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.133695   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.133698   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.133704   59622 system_pods.go:74] duration metric: took 180.152859ms to wait for pod list to return data ...
	I0116 23:56:03.133710   59622 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:56:03.331291   59622 default_sa.go:45] found service account: "default"
	I0116 23:56:03.331318   59622 default_sa.go:55] duration metric: took 197.601815ms for default service account to be created ...
	I0116 23:56:03.331327   59622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:56:03.535418   59622 system_pods.go:86] 7 kube-system pods found
	I0116 23:56:03.535445   59622 system_pods.go:89] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.535450   59622 system_pods.go:89] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.535454   59622 system_pods.go:89] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.535459   59622 system_pods.go:89] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.535462   59622 system_pods.go:89] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.535466   59622 system_pods.go:89] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.535470   59622 system_pods.go:89] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.535476   59622 system_pods.go:126] duration metric: took 204.144185ms to wait for k8s-apps to be running ...
	I0116 23:56:03.535483   59622 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:56:03.535528   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:56:03.558457   59622 system_svc.go:56] duration metric: took 22.958568ms WaitForService to wait for kubelet.
	I0116 23:56:03.558483   59622 kubeadm.go:581] duration metric: took 8.866344408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:56:03.558508   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:56:03.731393   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:56:03.731421   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:56:03.731429   59622 node_conditions.go:105] duration metric: took 172.916822ms to run NodePressure ...
	I0116 23:56:03.731440   59622 start.go:228] waiting for startup goroutines ...
	I0116 23:56:03.731446   59622 start.go:233] waiting for cluster config update ...
	I0116 23:56:03.731455   59622 start.go:242] writing updated cluster config ...
	I0116 23:56:03.731701   59622 ssh_runner.go:195] Run: rm -f paused
	I0116 23:56:03.779121   59622 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 23:56:03.780832   59622 out.go:177] 
	W0116 23:56:03.782249   59622 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 23:56:03.783563   59622 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 23:56:03.784839   59622 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-771669" cluster and "default" namespace by default
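	(The 59622 run above restarts the old-k8s-version-771669 control plane by re-running the kubeadm init phases, then waits first for the kube-apiserver process via pgrep and then for the /healthz endpoint to return 200. The commands below are only an illustrative sketch of how that wait could be reproduced by hand against the same VM; the IP 192.168.72.114 and profile name are taken from the log, and running them assumes the profile is still up.)
	# same process probe the log repeats inside the VM
	minikube -p old-k8s-version-771669 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	# poll the healthz endpoint; -k skips cert verification, so an unauthenticated
	# call may return 403 for system:anonymous, as seen above, before the 200 "ok"
	curl -k https://192.168.72.114:8443/healthz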
	I0116 23:56:00.398654   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.895567   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:04.986421   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:06.987967   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.844145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.844338   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.397178   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.895626   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.486597   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:11.987301   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:10.345558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.346663   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.896758   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.397091   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.488021   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.488653   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.844671   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.846046   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.897098   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:17.396519   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.986905   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.488422   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.846198   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.344147   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:19.397728   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.896773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.986213   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:25.986326   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:27.987150   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.845648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.344054   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:28.344553   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:24.396383   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.896341   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.487401   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.986835   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.346441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.847915   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:29.396831   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:31.397001   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:33.896875   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.486456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.488505   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:34.852382   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.347707   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.897340   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:38.397188   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.987512   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.487096   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.845150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:40.397474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.895926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.985826   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.987077   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.344935   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.844558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:45.397742   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:47.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:48.987672   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.488276   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.344755   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.844573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.902616   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:52.397613   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.989294   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:56.486373   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.844691   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:55.844956   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.345033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:54.899462   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:57.396680   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.986702   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.485949   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.486250   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:00.347078   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:02.845105   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:59.397016   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.397815   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.898419   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.486385   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.486685   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.344293   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.345029   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:06.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:08.397358   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.986254   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:11.986807   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.845903   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.345589   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:10.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.896725   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:13.986990   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.487092   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:14.845336   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.845800   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:15.396130   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:17.399737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:18.986833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:20.987345   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.486929   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.344648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.345638   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.896048   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.897272   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:25.987181   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.488006   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.846298   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.345451   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.346186   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:24.398032   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.896171   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.987497   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:33.485899   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.347831   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:32.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:29.398760   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:31.896331   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.486038   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.487296   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.344615   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.844449   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:34.397051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:36.400079   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:38.896897   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.492372   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.987336   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.847519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:42.346252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.396236   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.396714   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.988240   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:46.486455   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:48.487134   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:44.848036   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.345407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:45.397310   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.397378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:50.986902   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.492230   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.845627   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.397826   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.895923   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.897342   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:55.986753   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:57.986861   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:54.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.344864   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.345725   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.897155   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.486888   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.987550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.844347   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.846516   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:01.396565   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:03.397374   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:04.990116   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.487567   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.345481   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.844570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.897023   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:08.396985   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.990087   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.490589   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.844815   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:11.845732   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:10.895979   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.896502   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.986451   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.986611   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.344767   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.844872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:15.398203   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:17.399261   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:18.987191   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.487703   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:23.487926   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.347376   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.845439   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.896972   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:22.397424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:25.987262   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.486174   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.344012   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.347050   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.398243   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.896557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.987243   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.988415   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.844551   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.845899   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.846576   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:29.396646   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:31.397556   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:33.896411   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.486850   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.985735   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.344337   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.344473   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.896685   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.898876   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.986999   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.486890   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.345534   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:41.345897   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:40.396241   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.396546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.987464   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.988853   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:43.846142   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.343994   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.396719   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.896228   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.896671   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:49.486803   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:51.491540   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.845009   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.847872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:52.847933   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.897309   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.396763   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.987492   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:56.486550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:58.486963   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.346425   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.347346   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.397687   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.399191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:00.987456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.486837   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.843983   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.844326   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.895907   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.896151   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.900424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:05.991223   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.486493   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.844751   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.344021   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.344949   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.397063   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.895750   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.987148   59938 pod_ready.go:81] duration metric: took 4m0.007687151s waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:08.987175   59938 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 23:59:08.987182   59938 pod_ready.go:38] duration metric: took 4m1.609147819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:08.987199   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:59:08.987235   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:08.987285   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:09.035133   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:09.035154   59938 cri.go:89] found id: ""
	I0116 23:59:09.035161   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:09.035211   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.039082   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:09.039138   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:09.085096   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:09.085167   59938 cri.go:89] found id: ""
	I0116 23:59:09.085181   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:09.085246   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.090821   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:09.090893   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:09.127517   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.127548   59938 cri.go:89] found id: ""
	I0116 23:59:09.127558   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:09.127620   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.131643   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:09.131759   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:09.168954   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:09.168979   59938 cri.go:89] found id: ""
	I0116 23:59:09.168988   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:09.169049   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.173389   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:09.173454   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:09.212516   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.212543   59938 cri.go:89] found id: ""
	I0116 23:59:09.212549   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:09.212597   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.216401   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:09.216458   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:09.253140   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.253166   59938 cri.go:89] found id: ""
	I0116 23:59:09.253176   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:09.253235   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.257248   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:09.257315   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:09.296077   59938 cri.go:89] found id: ""
	I0116 23:59:09.296108   59938 logs.go:284] 0 containers: []
	W0116 23:59:09.296119   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:09.296126   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:09.296184   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:09.346212   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:09.346234   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:09.346240   59938 cri.go:89] found id: ""
	I0116 23:59:09.346261   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:09.346320   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.350651   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.353960   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:09.353984   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.387875   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:09.387900   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.428147   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:09.428173   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:09.481107   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:09.481135   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:09.536958   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:09.536994   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:09.550512   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:09.550547   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.605837   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:09.605870   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:10.096496   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:10.096548   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:10.134931   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:10.134973   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:10.276791   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:10.276824   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:10.335509   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:10.335544   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:10.395664   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:10.395708   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.431013   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:10.431051   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:12.975358   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:59:12.989628   59938 api_server.go:72] duration metric: took 4m12.851755215s to wait for apiserver process to appear ...
	I0116 23:59:12.989650   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:59:12.989689   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:12.989738   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:13.026039   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.026071   59938 cri.go:89] found id: ""
	I0116 23:59:13.026083   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:13.026138   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.030174   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:13.030236   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:13.067808   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:13.067834   59938 cri.go:89] found id: ""
	I0116 23:59:13.067840   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:13.067888   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.072042   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:13.072118   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:13.111330   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.111351   59938 cri.go:89] found id: ""
	I0116 23:59:13.111359   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:13.111403   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.115095   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:13.115187   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:13.158668   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:13.158691   59938 cri.go:89] found id: ""
	I0116 23:59:13.158699   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:13.158758   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.162836   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:13.162899   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:13.202353   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:13.202372   59938 cri.go:89] found id: ""
	I0116 23:59:13.202379   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:13.202425   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.206475   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:13.206544   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:13.241036   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:13.241069   59938 cri.go:89] found id: ""
	I0116 23:59:13.241080   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:13.241136   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.245245   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:13.245309   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:13.286069   59938 cri.go:89] found id: ""
	I0116 23:59:13.286098   59938 logs.go:284] 0 containers: []
	W0116 23:59:13.286107   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:13.286115   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:13.286178   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:13.324129   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.324148   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.324152   59938 cri.go:89] found id: ""
	I0116 23:59:13.324159   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:13.324201   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.328325   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.332030   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:13.332052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:13.345141   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:13.345181   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.404778   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:13.404809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.441286   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:13.441323   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:13.503668   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:13.503702   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.542599   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:13.542631   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.347184   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:12.844417   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:10.896545   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.397454   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.578579   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:13.578609   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.615906   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:13.615934   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:14.022019   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:14.022058   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:14.139776   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:14.139809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:14.201936   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:14.201970   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:14.240473   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:14.240500   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:14.291008   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:14.291037   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:16.843555   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:59:16.849532   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:59:16.850519   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:59:16.850538   59938 api_server.go:131] duration metric: took 3.860882856s to wait for apiserver health ...
	I0116 23:59:16.850547   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:59:16.850568   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:16.850610   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:16.900417   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:16.900434   59938 cri.go:89] found id: ""
	I0116 23:59:16.900441   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:16.900493   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.905495   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:16.905548   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:16.945387   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:16.945406   59938 cri.go:89] found id: ""
	I0116 23:59:16.945413   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:16.945463   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.949948   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:16.950016   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:16.987183   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:16.987202   59938 cri.go:89] found id: ""
	I0116 23:59:16.987209   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:16.987252   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.992140   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:16.992191   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:17.029253   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.029275   59938 cri.go:89] found id: ""
	I0116 23:59:17.029282   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:17.029336   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.033524   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:17.033609   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:17.068889   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:17.068913   59938 cri.go:89] found id: ""
	I0116 23:59:17.068932   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:17.068986   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.072818   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:17.072885   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:17.111186   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.111207   59938 cri.go:89] found id: ""
	I0116 23:59:17.111216   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:17.111279   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.115133   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:17.115192   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:17.150279   59938 cri.go:89] found id: ""
	I0116 23:59:17.150307   59938 logs.go:284] 0 containers: []
	W0116 23:59:17.150316   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:17.150321   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:17.150401   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:17.192284   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.192321   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.192328   59938 cri.go:89] found id: ""
	I0116 23:59:17.192338   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:17.192394   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.196472   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.200243   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:17.200266   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.240155   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:17.240188   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:17.252553   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:17.252585   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.304688   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:17.304721   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.346444   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:17.346470   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:17.497208   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:17.497241   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:17.561621   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:17.561648   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:17.611648   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:17.611677   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.646407   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:17.646436   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:17.991476   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:17.991528   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:18.053214   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:18.053251   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:18.128011   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:18.128049   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:18.165018   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:18.165052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:15.345715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.849104   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:15.896059   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.890054   60073 pod_ready.go:81] duration metric: took 4m0.00102229s waiting for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:17.890102   60073 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:17.890127   60073 pod_ready.go:38] duration metric: took 4m7.665333761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:17.890162   60073 kubeadm.go:640] restartCluster took 4m29.748178484s
	W0116 23:59:17.890247   60073 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:17.890288   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:20.715055   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:59:20.715096   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.715109   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.715116   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.715123   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.715129   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.715136   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.715146   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.715156   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.715180   59938 system_pods.go:74] duration metric: took 3.864627163s to wait for pod list to return data ...
	I0116 23:59:20.715190   59938 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:59:20.718138   59938 default_sa.go:45] found service account: "default"
	I0116 23:59:20.718165   59938 default_sa.go:55] duration metric: took 2.964863ms for default service account to be created ...
	I0116 23:59:20.718175   59938 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:59:20.724393   59938 system_pods.go:86] 8 kube-system pods found
	I0116 23:59:20.724420   59938 system_pods.go:89] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.724428   59938 system_pods.go:89] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.724435   59938 system_pods.go:89] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.724443   59938 system_pods.go:89] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.724449   59938 system_pods.go:89] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.724457   59938 system_pods.go:89] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.724467   59938 system_pods.go:89] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.724479   59938 system_pods.go:89] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.724490   59938 system_pods.go:126] duration metric: took 6.307831ms to wait for k8s-apps to be running ...
	I0116 23:59:20.724503   59938 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:59:20.724558   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:20.739056   59938 system_svc.go:56] duration metric: took 14.504317ms WaitForService to wait for kubelet.
	I0116 23:59:20.739102   59938 kubeadm.go:581] duration metric: took 4m20.601225794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:59:20.739130   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:59:20.742521   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:59:20.742550   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:59:20.742565   59938 node_conditions.go:105] duration metric: took 3.429513ms to run NodePressure ...
	I0116 23:59:20.742581   59938 start.go:228] waiting for startup goroutines ...
	I0116 23:59:20.742594   59938 start.go:233] waiting for cluster config update ...
	I0116 23:59:20.742607   59938 start.go:242] writing updated cluster config ...
	I0116 23:59:20.742897   59938 ssh_runner.go:195] Run: rm -f paused
	I0116 23:59:20.796748   59938 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 23:59:20.799136   59938 out.go:177] * Done! kubectl is now configured to use "no-preload-085322" cluster and "default" namespace by default
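	(For readers who want to reproduce the node_conditions.go capacity/NodePressure check logged just above outside of minikube, here is a minimal client-go sketch. It is illustrative only, not minikube's actual helper; the kubeconfig path, error handling, and printed format are assumptions.)

    // node_capacity_sketch.go - illustrative sketch, not minikube's node_conditions helper.
    // Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity values corresponding to the "node cpu capacity" / "ephemeral capacity" log lines.
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            // A node is under pressure if any of these conditions reports True.
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }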
	I0116 23:59:20.345640   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:22.845018   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:24.845103   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:26.846579   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:29.345070   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.346027   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:33.346506   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.203795   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.313480768s)
	I0116 23:59:31.203876   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:31.217359   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:31.228245   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:31.238220   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:31.238268   60073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:31.453638   60073 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 23:59:35.845570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:37.845959   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:42.067699   60073 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:42.067758   60073 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:42.067846   60073 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:42.067963   60073 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:42.068086   60073 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:42.068177   60073 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:42.069920   60073 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:42.070029   60073 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:42.070134   60073 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:42.070239   60073 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:42.070320   60073 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:42.070461   60073 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:42.070543   60073 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:42.070628   60073 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:42.070700   60073 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:42.070790   60073 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:42.070885   60073 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:42.070932   60073 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:42.070998   60073 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:42.071063   60073 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:42.071135   60073 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:42.071215   60073 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:42.071285   60073 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:42.071387   60073 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:42.071470   60073 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:42.072979   60073 out.go:204]   - Booting up control plane ...
	I0116 23:59:42.073092   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:42.073200   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:42.073276   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:42.073388   60073 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:42.073521   60073 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:42.073576   60073 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:42.073797   60073 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:42.073902   60073 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002800 seconds
	I0116 23:59:42.074028   60073 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 23:59:42.074167   60073 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 23:59:42.074262   60073 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 23:59:42.074513   60073 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-837871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 23:59:42.074590   60073 kubeadm.go:322] [bootstrap-token] Using token: ta3wls.bkzq7grnlnkl7idk
	I0116 23:59:42.076261   60073 out.go:204]   - Configuring RBAC rules ...
	I0116 23:59:42.076394   60073 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 23:59:42.076494   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 23:59:42.076672   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 23:59:42.076836   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 23:59:42.077027   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 23:59:42.077141   60073 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 23:59:42.077286   60073 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 23:59:42.077338   60073 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 23:59:42.077401   60073 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 23:59:42.077420   60073 kubeadm.go:322] 
	I0116 23:59:42.077490   60073 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 23:59:42.077501   60073 kubeadm.go:322] 
	I0116 23:59:42.077590   60073 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 23:59:42.077599   60073 kubeadm.go:322] 
	I0116 23:59:42.077631   60073 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 23:59:42.077704   60073 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 23:59:42.077768   60073 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 23:59:42.077777   60073 kubeadm.go:322] 
	I0116 23:59:42.077841   60073 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 23:59:42.077855   60073 kubeadm.go:322] 
	I0116 23:59:42.077910   60073 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 23:59:42.077918   60073 kubeadm.go:322] 
	I0116 23:59:42.077980   60073 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 23:59:42.078071   60073 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 23:59:42.078167   60073 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 23:59:42.078177   60073 kubeadm.go:322] 
	I0116 23:59:42.078274   60073 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 23:59:42.078382   60073 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 23:59:42.078392   60073 kubeadm.go:322] 
	I0116 23:59:42.078488   60073 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078612   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 23:59:42.078642   60073 kubeadm.go:322] 	--control-plane 
	I0116 23:59:42.078651   60073 kubeadm.go:322] 
	I0116 23:59:42.078749   60073 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 23:59:42.078758   60073 kubeadm.go:322] 
	I0116 23:59:42.078854   60073 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078989   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 23:59:42.079007   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:59:42.079017   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:59:42.080763   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:59:39.838671   60269 pod_ready.go:81] duration metric: took 4m0.001157455s waiting for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:39.838703   60269 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:39.838724   60269 pod_ready.go:38] duration metric: took 4m10.089026356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:39.838774   60269 kubeadm.go:640] restartCluster took 4m29.617057242s
	W0116 23:59:39.838852   60269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:39.838881   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:42.082183   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:59:42.116830   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:59:42.163609   60073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:59:42.163699   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.163705   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=embed-certs-837871 minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.221959   60073 ops.go:34] apiserver oom_adj: -16
	I0116 23:59:42.506451   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.007345   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.506584   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.007197   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.507002   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.006480   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.506954   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.006461   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.506833   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.007157   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.506780   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.007146   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.506504   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:49.006489   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.364253   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.525344336s)
	I0116 23:59:53.364334   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:53.379240   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:53.389562   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:53.400331   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:53.400385   60269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:53.462116   60269 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:53.462202   60269 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:53.624890   60269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:53.625015   60269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:53.625132   60269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:53.877364   60269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:49.506939   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.007132   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.506909   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.006499   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.506508   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.006475   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.507008   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.007272   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.506479   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.007240   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.507034   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.651685   60073 kubeadm.go:1088] duration metric: took 12.488048347s to wait for elevateKubeSystemPrivileges.
	I0116 23:59:54.651729   60073 kubeadm.go:406] StartCluster complete in 5m6.561279262s
	I0116 23:59:54.651753   60073 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.651855   60073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:59:54.654608   60073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.654868   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:59:54.654894   60073 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:59:54.654964   60073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-837871"
	I0116 23:59:54.654980   60073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-837871"
	I0116 23:59:54.655005   60073 addons.go:69] Setting metrics-server=true in profile "embed-certs-837871"
	I0116 23:59:54.655018   60073 addons.go:234] Setting addon metrics-server=true in "embed-certs-837871"
	W0116 23:59:54.655027   60073 addons.go:243] addon metrics-server should already be in state true
	I0116 23:59:54.655090   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:59:54.655026   60073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-837871"
	I0116 23:59:54.655160   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.654988   60073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-837871"
	W0116 23:59:54.655234   60073 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:59:54.655271   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.655539   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655568   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655652   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655734   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.672017   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0116 23:59:54.672591   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.673220   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.673241   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.673335   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0116 23:59:54.673863   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0116 23:59:54.673894   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.673865   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674262   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674430   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674447   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.674491   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.674517   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.674764   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.674932   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674943   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.675310   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.675465   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.675601   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.675631   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.679148   60073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-837871"
	W0116 23:59:54.679166   60073 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:59:54.679192   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.679564   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.679582   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.694210   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0116 23:59:54.694711   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.694923   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0116 23:59:54.695308   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.695325   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.695432   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.695724   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.696036   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.696059   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.696124   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.696524   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.697116   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.697142   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.697326   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0116 23:59:54.697741   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.698016   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.700178   60073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:59:54.698504   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.701842   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.701911   60073 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:54.701927   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:59:54.701945   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.704090   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.704258   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.705992   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.706067   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.707873   60073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:59:53.878701   60269 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:53.878801   60269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:53.878881   60269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:53.879376   60269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:53.879833   60269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:53.880391   60269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:53.880900   60269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:53.881422   60269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:53.881941   60269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:53.882468   60269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:53.882982   60269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:53.883410   60269 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:53.883502   60269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:54.118678   60269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:54.334917   60269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:54.487424   60269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:55.124961   60269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:55.125701   60269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:55.128156   60269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:54.706475   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.706576   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.709278   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:59:54.709292   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:59:54.709305   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.709341   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.709501   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.709672   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.709805   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.712515   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713092   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.713180   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713283   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.713426   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.713633   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.713742   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.716354   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0116 23:59:54.716699   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.717118   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.717135   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.717441   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.717677   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.719338   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.719591   60073 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:54.719604   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:59:54.719619   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.722542   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.722963   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.723002   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.723112   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.723259   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.723463   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.723587   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.885431   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 23:59:55.001297   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:59:55.001329   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:59:55.003513   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:55.008428   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:55.068722   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:59:55.068751   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:59:55.129663   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:55.129686   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:59:55.161891   60073 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-837871" context rescaled to 1 replicas
	I0116 23:59:55.161935   60073 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:59:55.164356   60073 out.go:177] * Verifying Kubernetes components...
	I0116 23:59:55.165822   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:55.240612   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:56.696329   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810851137s)
	I0116 23:59:56.696383   60073 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
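	(The two lines above show minikube injecting the host.minikube.internal record into the CoreDNS Corefile via kubectl and sed. A rough client-go equivalent is sketched below; it is an assumption-laden illustration, not minikube's code, and it simplifies the whitespace handling of the sed expression. The IP 192.168.39.1 is taken from the log.)

    // corefile_hosts_sketch.go - illustrative only; minikube performs this with the kubectl/sed pipeline logged above.
    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        hostsBlock := "hosts {\n   192.168.39.1 host.minikube.internal\n   fallthrough\n}\n"
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            // Insert the hosts block just before the forward plugin; exact indentation depends on the Corefile.
            corefile = strings.Replace(corefile, "forward .", hostsBlock+"forward .", 1)
            cm.Data["Corefile"] = corefile
            if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }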
	I0116 23:59:56.696338   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.69278648s)
	I0116 23:59:56.696422   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696440   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.696806   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.696868   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.696879   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.696889   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696898   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.697174   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.697191   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.697193   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.729656   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.729685   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.730006   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.730047   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.730051   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.196943   60073 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.031082317s)
	I0116 23:59:57.196991   60073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.197171   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.188708335s)
	I0116 23:59:57.197216   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197232   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197556   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197573   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.197590   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197600   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197905   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.197908   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197976   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.211232   60073 node_ready.go:49] node "embed-certs-837871" has status "Ready":"True"
	I0116 23:59:57.211308   60073 node_ready.go:38] duration metric: took 14.304366ms waiting for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.211330   60073 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:57.230768   60073 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:57.274393   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033730298s)
	I0116 23:59:57.274453   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274471   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.274881   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.274904   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.274915   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274925   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.275196   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.275249   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.275273   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.275284   60073 addons.go:470] Verifying addon metrics-server=true in "embed-certs-837871"
	I0116 23:59:57.277304   60073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 23:59:55.129817   60269 out.go:204]   - Booting up control plane ...
	I0116 23:59:55.129937   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:55.130951   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:55.132943   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:55.149929   60269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:55.151138   60269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:55.151234   60269 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:55.303686   60269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:57.278953   60073 addons.go:505] enable addons completed in 2.62405803s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 23:59:58.738410   60073 pod_ready.go:92] pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.738434   60073 pod_ready.go:81] duration metric: took 1.507588571s waiting for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.738444   60073 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744592   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.744617   60073 pod_ready.go:81] duration metric: took 6.165419ms waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744626   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750130   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.750152   60073 pod_ready.go:81] duration metric: took 5.519057ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750164   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755783   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.755809   60073 pod_ready.go:81] duration metric: took 5.636904ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755821   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801735   60073 pod_ready.go:92] pod "kube-proxy-n2l6s" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.801769   60073 pod_ready.go:81] duration metric: took 45.939564ms waiting for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801784   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:02.807761   60269 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503615 seconds
	I0117 00:00:02.807943   60269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:00:02.828242   60269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:00:03.364977   60269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:00:03.365242   60269 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-967325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:00:03.879636   60269 kubeadm.go:322] [bootstrap-token] Using token: y6fuay.d44apxq5qutu9x05
	I0116 23:59:59.202392   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:59.202420   60073 pod_ready.go:81] duration metric: took 400.626378ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:59.202435   60073 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:01.211490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.710138   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
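	(The pod_ready.go lines above poll each system pod until its Ready condition is True, which is where the metrics-server waits eventually time out. A minimal client-go sketch of that kind of wait loop is below; it is illustrative only, not minikube's pod_ready helper. The pod name and namespace are taken from the log; the poll interval is an assumption.)

    // pod_ready_sketch.go - illustrative sketch of waiting for a pod's Ready condition.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s per pod
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-6rsbl", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }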
	I0117 00:00:03.881170   60269 out.go:204]   - Configuring RBAC rules ...
	I0117 00:00:03.881357   60269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:00:03.888392   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:00:03.896580   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:00:03.900204   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:00:03.907475   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:00:03.911613   60269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:00:03.931171   60269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:00:04.171033   60269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:00:04.300769   60269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:00:04.300793   60269 kubeadm.go:322] 
	I0117 00:00:04.300911   60269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:00:04.300944   60269 kubeadm.go:322] 
	I0117 00:00:04.301038   60269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:00:04.301049   60269 kubeadm.go:322] 
	I0117 00:00:04.301089   60269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:00:04.301161   60269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:00:04.301223   60269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:00:04.301234   60269 kubeadm.go:322] 
	I0117 00:00:04.301302   60269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:00:04.301312   60269 kubeadm.go:322] 
	I0117 00:00:04.301373   60269 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:00:04.301387   60269 kubeadm.go:322] 
	I0117 00:00:04.301445   60269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:00:04.301545   60269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:00:04.301645   60269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:00:04.301656   60269 kubeadm.go:322] 
	I0117 00:00:04.301758   60269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:00:04.301861   60269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:00:04.301871   60269 kubeadm.go:322] 
	I0117 00:00:04.301972   60269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302108   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:00:04.302156   60269 kubeadm.go:322] 	--control-plane 
	I0117 00:00:04.302167   60269 kubeadm.go:322] 
	I0117 00:00:04.302261   60269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:00:04.302272   60269 kubeadm.go:322] 
	I0117 00:00:04.302381   60269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302499   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:00:04.303423   60269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:00:04.303460   60269 cni.go:84] Creating CNI manager for ""
	I0117 00:00:04.303481   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:00:04.305311   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:00:04.307124   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:00:04.322172   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
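	The two steps above create /etc/cni/net.d and copy in a 457-byte bridge CNI config. The exact file contents are not reproduced in this log; a bridge + host-local conflist of roughly the shape minikube generates is sketched below (field values are illustrative assumptions, not the bytes that were copied):

	    # Illustrative only: subnet and plugin options are assumptions,
	    # not the contents of the file written above.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF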
	I0117 00:00:04.389195   60269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:00:04.389280   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.389289   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=default-k8s-diff-port-967325 minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.714781   60269 ops.go:34] apiserver oom_adj: -16
	I0117 00:00:04.714929   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.215335   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.715241   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.215729   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.715270   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.215562   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.716006   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.215883   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.715530   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.710945   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:08.210490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:09.215561   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:09.715330   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215559   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.715284   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.215535   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.715573   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.215144   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.715603   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.715595   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:12.709378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:14.215373   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:14.715933   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.715488   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.215344   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.714958   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.874728   60269 kubeadm.go:1088] duration metric: took 12.485508304s to wait for elevateKubeSystemPrivileges.
	I0117 00:00:16.874771   60269 kubeadm.go:406] StartCluster complete in 5m6.711968782s
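	The repeated "kubectl get sa default" calls above are the wait measured by the 12.48s elevateKubeSystemPrivileges duration: after creating the minikube-rbac clusterrolebinding, the harness polls until the cluster's default ServiceAccount exists (the controller manager creates it shortly after startup). An equivalent manual wait, as a minimal bash sketch:

	    # Poll until the default ServiceAccount exists (what the retry loop above does).
	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done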
	I0117 00:00:16.874796   60269 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.874888   60269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:00:16.877055   60269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.877357   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0117 00:00:16.877379   60269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0117 00:00:16.877462   60269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877481   60269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877496   60269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877517   60269 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877523   60269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-967325"
	W0117 00:00:16.877526   60269 addons.go:243] addon metrics-server should already be in state true
	I0117 00:00:16.877487   60269 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877580   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0117 00:00:16.877586   60269 addons.go:243] addon storage-provisioner should already be in state true
	I0117 00:00:16.877598   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877641   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.877996   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.878023   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878044   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878110   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.894446   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0117 00:00:16.894710   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0117 00:00:16.894884   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895198   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895375   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895395   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895731   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895757   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895804   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896075   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896401   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.896436   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.896491   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0117 00:00:16.896763   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.897458   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.898007   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.898028   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.898517   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.899079   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.899106   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.900589   60269 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-967325"
	W0117 00:00:16.900606   60269 addons.go:243] addon default-storageclass should already be in state true
	I0117 00:00:16.900632   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.900945   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.900974   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.917329   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0117 00:00:16.918223   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0117 00:00:16.918283   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918593   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918787   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.918806   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919109   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.919135   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919173   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919426   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.919500   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.921674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.923470   60269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0117 00:00:16.922093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.924865   60269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:16.924882   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0117 00:00:16.924900   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.926158   60269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0117 00:00:16.927440   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0117 00:00:16.927461   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0117 00:00:16.927490   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.928105   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.928694   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.929107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.929289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.929432   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.930149   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0117 00:00:16.930552   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.931255   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.931275   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.931335   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931584   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.931606   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931762   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.931908   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.932042   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.932086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.932178   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.933382   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.933419   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.949543   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0117 00:00:16.950092   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.950585   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.950611   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.950912   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.951212   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.952912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.953207   60269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:16.953221   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0117 00:00:16.953242   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.955778   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956104   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.956144   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956381   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.956659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.956808   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.956958   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:17.129430   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0117 00:00:17.167358   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:17.198527   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0117 00:00:17.198553   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0117 00:00:17.313705   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0117 00:00:17.313743   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0117 00:00:17.318720   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:17.387945   60269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-967325" context rescaled to 1 replicas
	I0117 00:00:17.387984   60269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:00:17.391319   60269 out.go:177] * Verifying Kubernetes components...
	I0117 00:00:17.392893   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:00:17.493520   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:17.493544   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0117 00:00:17.613989   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
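	The four metrics-server manifests applied here install the Deployment, RBAC, Service, and the aggregated-API registration. Assuming the standard APIService name used by metrics-server (v1beta1.metrics.k8s.io) and the object names implied by the pod names in this log, the registration can be checked afterwards with:

	    kubectl --context default-k8s-diff-port-967325 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context default-k8s-diff-port-967325 -n kube-system get deploy,svc metrics-server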
	I0117 00:00:14.710779   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:17.209946   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:18.852085   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.722614342s)
	I0117 00:00:18.852124   60269 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
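	The 1.7s sed pipeline completed above injects a hosts block into the coredns Corefile so that host.minikube.internal resolves to the host gateway IP (192.168.61.1 here). The injected fragment, taken directly from the command shown in the log, can be confirmed with:

	    kubectl --context default-k8s-diff-port-967325 -n kube-system \
	      get configmap coredns -o jsonpath='{.data.Corefile}'
	    # expected to contain, before the forward plugin:
	    #     hosts {
	    #        192.168.61.1 host.minikube.internal
	    #        fallthrough
	    #     }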
	I0117 00:00:19.595960   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.277198121s)
	I0117 00:00:19.595983   60269 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.203057581s)
	I0117 00:00:19.596019   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596022   60269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.596033   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596131   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.428744793s)
	I0117 00:00:19.596164   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596175   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596418   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596437   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596448   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596458   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596544   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596572   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596585   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596603   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596675   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596683   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596697   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.598431   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.598485   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.598507   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.614041   60269 node_ready.go:49] node "default-k8s-diff-port-967325" has status "Ready":"True"
	I0117 00:00:19.614070   60269 node_ready.go:38] duration metric: took 18.033715ms waiting for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.614083   60269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:00:19.631026   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.631065   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.631393   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.631412   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.631430   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.643995   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.685268   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.071240033s)
	I0117 00:00:19.685313   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685685   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685706   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685722   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685725   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.685733   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685949   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685973   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685984   60269 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:19.688162   60269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0117 00:00:19.690707   60269 addons.go:505] enable addons completed in 2.813327403s: enabled=[storage-provisioner default-storageclass metrics-server]
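	For the other two addons reported enabled above, a quick manual cross-check (not part of the test run; object names are those minikube conventionally creates) would be:

	    kubectl --context default-k8s-diff-port-967325 -n kube-system get pod storage-provisioner
	    kubectl --context default-k8s-diff-port-967325 get storageclass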
	I0117 00:00:20.653786   60269 pod_ready.go:92] pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.653817   60269 pod_ready.go:81] duration metric: took 1.009789354s waiting for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.653827   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.657327   60269 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657355   60269 pod_ready.go:81] duration metric: took 3.520465ms waiting for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	E0117 00:00:20.657367   60269 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657375   60269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664327   60269 pod_ready.go:92] pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.664345   60269 pod_ready.go:81] duration metric: took 6.963883ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664354   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669229   60269 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.669247   60269 pod_ready.go:81] duration metric: took 4.887581ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669255   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675553   60269 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.675577   60269 pod_ready.go:81] duration metric: took 6.316801ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675585   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800600   60269 pod_ready.go:92] pod "kube-proxy-2z6bl" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:21.800632   60269 pod_ready.go:81] duration metric: took 1.125039774s waiting for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800646   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200536   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:22.200559   60269 pod_ready.go:81] duration metric: took 399.905665ms waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200569   60269 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.212369   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:21.709474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:23.710530   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:24.210445   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:26.709024   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:28.709454   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:25.710634   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:27.710692   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:30.709571   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.710848   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:29.710867   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.209611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:35.208419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:37.708871   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:34.209847   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:36.210863   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:38.211047   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.209274   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711560   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.212061   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711598   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.209016   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211322   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.211051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.709459   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.209458   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.711889   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.210405   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.710123   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:57.208591   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.210670   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:56.711102   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:58.711595   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:59.708515   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.710699   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.210587   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:03.210938   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:04.207715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:06.709563   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:05.211825   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:07.709958   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:09.208156   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:11.208879   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:13.708545   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:10.211100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:12.710100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:16.209033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:18.209754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:14.710821   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:17.212258   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:20.708444   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.712038   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:19.711436   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.210580   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.714772   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:27.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.213488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:26.711404   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.710945   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:32.208179   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.211008   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:31.212442   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:33.711966   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:34.208936   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.209612   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.708413   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.211118   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.214093   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:41.208750   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:43.208812   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:40.710199   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:42.710497   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.708094   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:48.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.210899   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:47.214352   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:50.708669   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:52.709880   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:49.709767   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:51.710715   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:53.714522   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:55.209030   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:57.709205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:56.212226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:58.715976   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:00.209358   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:02.710521   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:01.210842   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:03.710418   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.208742   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:07.210121   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.711354   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:08.211933   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:09.210830   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:11.708402   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:13.710205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:10.212433   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:12.715928   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:16.207633   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:18.208824   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:15.214546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:17.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.209380   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.708970   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.212349   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.711167   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.208762   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.708487   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.212601   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:30.209319   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.708822   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:29.711046   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:35.207798   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.217291   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:34.710869   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.210140   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.707745   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711335   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.708871   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711327   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.207582   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.207988   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:48.709297   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.211602   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.714689   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.208519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.208808   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:49.212952   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.214415   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.710355   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.209145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:57.210556   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.716301   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:58.211226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:59.709541   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.208573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:00.709819   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.712699   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.208754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:06.708448   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:08.709286   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.713780   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:07.213872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:10.709570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:13.208062   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:09.714259   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:12.211448   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:15.209488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:17.709522   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:14.710693   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:16.711192   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:20.207874   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:22.211189   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:19.210191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:21.210773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:23.213975   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:24.708835   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:26.708889   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:25.710691   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:27.711139   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:29.209704   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:31.209811   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:33.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:30.210569   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:32.211539   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:35.708998   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:38.208295   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:34.711729   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:37.210492   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:40.707726   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:42.709246   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:39.211926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:41.711599   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:43.711794   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:44.710010   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:47.208407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:46.211285   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:48.212279   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:49.208869   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:51.210676   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:53.708315   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:50.212776   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:52.710665   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:55.709867   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:58.210415   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:54.711312   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:57.210611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:00.708385   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:03.208916   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210900   60073 pod_ready.go:81] duration metric: took 4m0.008455197s waiting for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	E0117 00:03:59.210913   60073 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:03:59.210923   60073 pod_ready.go:38] duration metric: took 4m1.999568751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:03:59.210941   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:03:59.210977   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:03:59.211045   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:03:59.268921   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.268947   60073 cri.go:89] found id: ""
	I0117 00:03:59.268956   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:03:59.269005   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.273505   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:03:59.273575   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:03:59.316812   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:03:59.316838   60073 cri.go:89] found id: ""
	I0117 00:03:59.316847   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:03:59.316902   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.321703   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:03:59.321778   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:03:59.365900   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:03:59.365920   60073 cri.go:89] found id: ""
	I0117 00:03:59.365927   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:03:59.365979   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.371077   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:03:59.371148   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:03:59.410379   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:03:59.410405   60073 cri.go:89] found id: ""
	I0117 00:03:59.410415   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:03:59.410475   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.414679   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:03:59.414752   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:03:59.452102   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.452137   60073 cri.go:89] found id: ""
	I0117 00:03:59.452146   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:03:59.452208   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.456735   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:03:59.456805   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:03:59.497070   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:03:59.497097   60073 cri.go:89] found id: ""
	I0117 00:03:59.497105   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:03:59.497172   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.501388   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:03:59.501464   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:03:59.542895   60073 cri.go:89] found id: ""
	I0117 00:03:59.542921   60073 logs.go:284] 0 containers: []
	W0117 00:03:59.542929   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:03:59.542935   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:03:59.542986   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:03:59.579487   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:03:59.579510   60073 cri.go:89] found id: ""
	I0117 00:03:59.579529   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:03:59.579583   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.583247   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:03:59.583272   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:03:59.682098   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:03:59.682136   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:03:59.811527   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:03:59.811555   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.858592   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:03:59.858623   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.896044   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:03:59.896077   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:00.305516   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:00.305553   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:00.346703   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:00.346734   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:00.360638   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:00.360671   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:00.405575   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:00.405607   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:00.443294   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:00.443325   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:00.489541   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:00.489572   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:00.547805   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:00.547835   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.085588   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:03.102500   60073 api_server.go:72] duration metric: took 4m7.940532649s to wait for apiserver process to appear ...
	I0117 00:04:03.102525   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:03.102560   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:03.102604   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:03.154743   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.154765   60073 cri.go:89] found id: ""
	I0117 00:04:03.154775   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:03.154837   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.158905   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:03.158964   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:03.199001   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.199026   60073 cri.go:89] found id: ""
	I0117 00:04:03.199035   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:03.199090   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.203757   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:03.203821   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:03.243821   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:03.243853   60073 cri.go:89] found id: ""
	I0117 00:04:03.243862   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:03.243926   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.248835   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:03.248938   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:03.287785   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.287807   60073 cri.go:89] found id: ""
	I0117 00:04:03.287817   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:03.287879   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.291737   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:03.291795   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:03.329647   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.329671   60073 cri.go:89] found id: ""
	I0117 00:04:03.329680   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:03.329740   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.337418   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:03.337513   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:03.375391   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:03.375412   60073 cri.go:89] found id: ""
	I0117 00:04:03.375419   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:03.375468   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.379630   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:03.379697   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:03.418311   60073 cri.go:89] found id: ""
	I0117 00:04:03.418353   60073 logs.go:284] 0 containers: []
	W0117 00:04:03.418366   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:03.418374   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:03.418425   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:03.464391   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.464414   60073 cri.go:89] found id: ""
	I0117 00:04:03.464421   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:03.464465   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.469427   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:03.469463   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:03.568016   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:03.568061   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:03.581553   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:03.581578   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.628971   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:03.629007   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.679732   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:03.679768   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.728836   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:03.728875   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.771849   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:03.771879   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:03.902777   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:03.902816   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.952219   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:03.952255   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:04.003190   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:04.003247   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:05.708428   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:07.708492   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:04.067058   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:04.067090   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:04.446812   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:04.446869   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:07.005449   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0117 00:04:07.011401   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0117 00:04:07.012696   60073 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:07.012723   60073 api_server.go:131] duration metric: took 3.910192448s to wait for apiserver health ...
	I0117 00:04:07.012732   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:07.012758   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:07.012804   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:07.052667   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:07.052699   60073 cri.go:89] found id: ""
	I0117 00:04:07.052708   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:07.052769   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.057415   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:07.057482   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:07.096347   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.096374   60073 cri.go:89] found id: ""
	I0117 00:04:07.096383   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:07.096445   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.100499   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:07.100598   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:07.145539   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:07.145561   60073 cri.go:89] found id: ""
	I0117 00:04:07.145567   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:07.145625   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.149880   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:07.149936   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:07.188723   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:07.188751   60073 cri.go:89] found id: ""
	I0117 00:04:07.188760   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:07.188822   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.193191   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:07.193259   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:07.236787   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.236811   60073 cri.go:89] found id: ""
	I0117 00:04:07.236820   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:07.236876   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.241167   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:07.241219   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:07.279432   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.279453   60073 cri.go:89] found id: ""
	I0117 00:04:07.279462   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:07.279527   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.283548   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:07.283618   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:07.319879   60073 cri.go:89] found id: ""
	I0117 00:04:07.319912   60073 logs.go:284] 0 containers: []
	W0117 00:04:07.319922   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:07.319930   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:07.319992   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:07.356138   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.356162   60073 cri.go:89] found id: ""
	I0117 00:04:07.356170   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:07.356219   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.360310   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:07.360339   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:07.457151   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:07.457197   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.501163   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:07.501207   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.544248   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:07.544279   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.593284   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:07.593321   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.635978   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:07.636016   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:07.950451   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:07.950489   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:08.003046   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:08.003089   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:08.017299   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:08.017341   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:08.152348   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:08.152401   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:08.213047   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:08.213084   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:08.249860   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:08.249897   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:10.813629   60073 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:10.813656   60073 system_pods.go:61] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.813670   60073 system_pods.go:61] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.813676   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.813681   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.813685   60073 system_pods.go:61] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.813689   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.813695   60073 system_pods.go:61] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.813699   60073 system_pods.go:61] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.813707   60073 system_pods.go:74] duration metric: took 3.800969531s to wait for pod list to return data ...
	I0117 00:04:10.813714   60073 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:10.816640   60073 default_sa.go:45] found service account: "default"
	I0117 00:04:10.816662   60073 default_sa.go:55] duration metric: took 2.941561ms for default service account to be created ...
	I0117 00:04:10.816669   60073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:10.823246   60073 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:10.823270   60073 system_pods.go:89] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.823274   60073 system_pods.go:89] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.823279   60073 system_pods.go:89] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.823283   60073 system_pods.go:89] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.823287   60073 system_pods.go:89] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.823291   60073 system_pods.go:89] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.823297   60073 system_pods.go:89] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.823302   60073 system_pods.go:89] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.823309   60073 system_pods.go:126] duration metric: took 6.635452ms to wait for k8s-apps to be running ...
	I0117 00:04:10.823316   60073 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:10.823358   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:10.840725   60073 system_svc.go:56] duration metric: took 17.401272ms WaitForService to wait for kubelet.
	I0117 00:04:10.840756   60073 kubeadm.go:581] duration metric: took 4m15.678792469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:10.840782   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:10.843904   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:10.843926   60073 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:10.843938   60073 node_conditions.go:105] duration metric: took 3.150197ms to run NodePressure ...
	I0117 00:04:10.843949   60073 start.go:228] waiting for startup goroutines ...
	I0117 00:04:10.843954   60073 start.go:233] waiting for cluster config update ...
	I0117 00:04:10.843963   60073 start.go:242] writing updated cluster config ...
	I0117 00:04:10.844214   60073 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:10.894554   60073 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:10.896971   60073 out.go:177] * Done! kubectl is now configured to use "embed-certs-837871" cluster and "default" namespace by default
	I0117 00:04:10.209252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:12.707441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:14.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:17.208289   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:19.708419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:21.708960   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:22.208465   60269 pod_ready.go:81] duration metric: took 4m0.007885269s waiting for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	E0117 00:04:22.208486   60269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:04:22.208494   60269 pod_ready.go:38] duration metric: took 4m2.594399816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:04:22.208508   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:04:22.208558   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:22.208608   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:22.258977   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.259005   60269 cri.go:89] found id: ""
	I0117 00:04:22.259013   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:22.259116   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.264067   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:22.264126   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:22.302361   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:22.302396   60269 cri.go:89] found id: ""
	I0117 00:04:22.302407   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:22.302471   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.306898   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:22.306956   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:22.347083   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.347110   60269 cri.go:89] found id: ""
	I0117 00:04:22.347119   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:22.347177   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.352368   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:22.352441   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:22.392093   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:22.392121   60269 cri.go:89] found id: ""
	I0117 00:04:22.392131   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:22.392264   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.397726   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:22.397791   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:22.434242   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:22.434265   60269 cri.go:89] found id: ""
	I0117 00:04:22.434275   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:22.434342   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.438904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:22.438969   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:22.474797   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.474818   60269 cri.go:89] found id: ""
	I0117 00:04:22.474828   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:22.474874   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.478956   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:22.479020   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:22.517049   60269 cri.go:89] found id: ""
	I0117 00:04:22.517078   60269 logs.go:284] 0 containers: []
	W0117 00:04:22.517089   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:22.517096   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:22.517160   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:22.566393   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:22.566419   60269 cri.go:89] found id: ""
	I0117 00:04:22.566428   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:22.566486   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.572179   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:22.572206   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.624440   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:22.624471   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.666603   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:22.666629   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.734797   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:22.734829   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:22.827906   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:22.827941   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:22.842239   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:22.842269   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:22.990196   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:22.990226   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:23.048894   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:23.048933   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:23.093309   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:23.093340   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:23.135374   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:23.135400   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:23.172339   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:23.172366   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:23.567228   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:23.567266   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:26.111237   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:26.127331   60269 api_server.go:72] duration metric: took 4m8.739316517s to wait for apiserver process to appear ...
	I0117 00:04:26.127358   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:26.127403   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:26.127465   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:26.164726   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:26.164752   60269 cri.go:89] found id: ""
	I0117 00:04:26.164763   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:26.164824   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.168448   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:26.168500   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:26.205643   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:26.205673   60269 cri.go:89] found id: ""
	I0117 00:04:26.205682   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:26.205742   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.209923   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:26.209982   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:26.247432   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:26.247456   60269 cri.go:89] found id: ""
	I0117 00:04:26.247463   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:26.247514   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.251904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:26.252009   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:26.292943   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.292971   60269 cri.go:89] found id: ""
	I0117 00:04:26.292980   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:26.293038   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.298224   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:26.298307   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:26.338299   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:26.338322   60269 cri.go:89] found id: ""
	I0117 00:04:26.338331   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:26.338398   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.342452   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:26.342520   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:26.384665   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.384693   60269 cri.go:89] found id: ""
	I0117 00:04:26.384702   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:26.384761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.389556   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:26.389629   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:26.427717   60269 cri.go:89] found id: ""
	I0117 00:04:26.427748   60269 logs.go:284] 0 containers: []
	W0117 00:04:26.427758   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:26.427766   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:26.427825   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:26.467435   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.467463   60269 cri.go:89] found id: ""
	I0117 00:04:26.467471   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:26.467529   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.471617   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:26.471641   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.514185   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:26.514216   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.569408   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:26.569440   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.610011   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:26.610040   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:26.976249   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:26.976286   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:27.019812   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:27.019855   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:27.064258   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:27.064285   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:27.104147   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:27.104181   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:27.157665   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:27.157695   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:27.255786   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:27.255824   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:27.269460   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:27.269497   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:27.420255   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:27.420288   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.008636   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0117 00:04:30.014467   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0117 00:04:30.015693   60269 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:30.015716   60269 api_server.go:131] duration metric: took 3.888351113s to wait for apiserver health ...
	I0117 00:04:30.015724   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:30.015745   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:30.015789   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:30.055587   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.055608   60269 cri.go:89] found id: ""
	I0117 00:04:30.055626   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:30.055677   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.060043   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:30.060108   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:30.102912   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:30.102938   60269 cri.go:89] found id: ""
	I0117 00:04:30.102946   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:30.102995   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.107429   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:30.107490   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:30.149238   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.149259   60269 cri.go:89] found id: ""
	I0117 00:04:30.149266   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:30.149318   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.154207   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:30.154276   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:30.195972   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.195998   60269 cri.go:89] found id: ""
	I0117 00:04:30.196008   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:30.196067   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.200515   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:30.200593   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:30.242656   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.242686   60269 cri.go:89] found id: ""
	I0117 00:04:30.242696   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:30.242761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.247430   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:30.247488   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:30.285008   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.285036   60269 cri.go:89] found id: ""
	I0117 00:04:30.285045   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:30.285123   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.292254   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:30.292325   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:30.329856   60269 cri.go:89] found id: ""
	I0117 00:04:30.329884   60269 logs.go:284] 0 containers: []
	W0117 00:04:30.329895   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:30.329902   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:30.329962   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:30.370003   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.370026   60269 cri.go:89] found id: ""
	I0117 00:04:30.370033   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:30.370081   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.374869   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:30.374896   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:30.388524   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:30.388564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:30.520901   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:30.520935   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.568977   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:30.569016   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.604580   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:30.604620   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.642634   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:30.642668   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.692005   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:30.692048   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:30.745471   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:30.745532   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:30.842886   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:30.842926   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.891850   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:30.891882   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.929266   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:30.929295   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:31.236511   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:31.236564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:33.783706   60269 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:33.783732   60269 system_pods.go:61] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.783737   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.783742   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.783746   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.783750   60269 system_pods.go:61] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.783754   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.783760   60269 system_pods.go:61] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.783764   60269 system_pods.go:61] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.783772   60269 system_pods.go:74] duration metric: took 3.768043559s to wait for pod list to return data ...
	I0117 00:04:33.783780   60269 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:33.786490   60269 default_sa.go:45] found service account: "default"
	I0117 00:04:33.786515   60269 default_sa.go:55] duration metric: took 2.725972ms for default service account to be created ...
	I0117 00:04:33.786525   60269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:33.793345   60269 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:33.793372   60269 system_pods.go:89] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.793377   60269 system_pods.go:89] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.793382   60269 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.793388   60269 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.793392   60269 system_pods.go:89] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.793396   60269 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.793404   60269 system_pods.go:89] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.793410   60269 system_pods.go:89] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.793417   60269 system_pods.go:126] duration metric: took 6.886472ms to wait for k8s-apps to be running ...
	I0117 00:04:33.793427   60269 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:33.793470   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:33.809147   60269 system_svc.go:56] duration metric: took 15.709692ms WaitForService to wait for kubelet.
	I0117 00:04:33.809197   60269 kubeadm.go:581] duration metric: took 4m16.421187944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:33.809225   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:33.813251   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:33.813289   60269 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:33.813315   60269 node_conditions.go:105] duration metric: took 4.084961ms to run NodePressure ...
	I0117 00:04:33.813339   60269 start.go:228] waiting for startup goroutines ...
	I0117 00:04:33.813349   60269 start.go:233] waiting for cluster config update ...
	I0117 00:04:33.813362   60269 start.go:242] writing updated cluster config ...
	I0117 00:04:33.813716   60269 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:33.866136   60269 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:33.868353   60269 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-967325" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:05:05 UTC. --
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.532789818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705449905532777744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e08a018b-3ee6-4ae7-ba7d-7742d23884b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.533568921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=283091cf-fc05-4867-bca3-016a94c355da name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.533616783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=283091cf-fc05-4867-bca3-016a94c355da name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.533867479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=283091cf-fc05-4867-bca3-016a94c355da name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.576526651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b0448297-32fc-40a5-8b83-1ac71a8af3e0 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.576642075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b0448297-32fc-40a5-8b83-1ac71a8af3e0 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.578205701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=41018f02-76b8-4e27-8d97-69325d353bfe name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.578582090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705449905578566716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=41018f02-76b8-4e27-8d97-69325d353bfe name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.579118793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1d773dbd-1d29-4d7b-bfbf-aa29dc9784ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.579196762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1d773dbd-1d29-4d7b-bfbf-aa29dc9784ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.579360599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1d773dbd-1d29-4d7b-bfbf-aa29dc9784ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.618309453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9f7497ba-7191-4a16-a556-db9ed5149704 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.618391225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9f7497ba-7191-4a16-a556-db9ed5149704 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.619812772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c7c212e6-6020-4b82-a103-1d841c87a532 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.620311786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705449905620296781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c7c212e6-6020-4b82-a103-1d841c87a532 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.621336626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1cc9cf63-26b2-480e-82b7-058e8e778534 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.621404477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1cc9cf63-26b2-480e-82b7-058e8e778534 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.621642382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cc9cf63-26b2-480e-82b7-058e8e778534 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.657147902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0fe0dedd-1fdb-4ad2-9d50-edbce1528bd4 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.657237092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0fe0dedd-1fdb-4ad2-9d50-edbce1528bd4 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.658574592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=77238016-1fa7-4f29-9dd0-92ffc5322f44 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.659081100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705449905659063107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=77238016-1fa7-4f29-9dd0-92ffc5322f44 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.660059109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=262197ef-01aa-4810-b349-3a2350e7982e name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.660126419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=262197ef-01aa-4810-b349-3a2350e7982e name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:05:05 old-k8s-version-771669 crio[714]: time="2024-01-17 00:05:05.660310927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=262197ef-01aa-4810-b349-3a2350e7982e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9459eba4162be       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   69a4cbb576850       busybox
	21a6dceb568ad       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   861a780833a2d       coredns-5644d7b6d9-9njqp
	5cbd938949134       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Running             storage-provisioner       0                   51a17462d718a       storage-provisioner
	a613a4e4ddfe3       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   9e58ca8a29986       kube-proxy-9ghls
	7a937abd3b903       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   453bb94b5ee72       etcd-old-k8s-version-771669
	f4999acc2d6d7       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   5f2e4e8fdc564       kube-apiserver-old-k8s-version-771669
	911f813160b15       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   e3d35b7aba356       kube-controller-manager-old-k8s-version-771669
	494f74041efd3       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   13d26353ba2d4       kube-scheduler-old-k8s-version-771669
	
	
	==> coredns [21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942] <==
	E0116 23:46:10.187359       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.193152       1 trace.go:82] Trace[785493325]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.186709268 +0000 UTC m=+0.081907198) (total time: 30.006404152s):
	Trace[785493325]: [30.006404152s] [30.006404152s] END
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.200490       1 trace.go:82] Trace[1301817211]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.19394028 +0000 UTC m=+0.089138224) (total time: 30.006532947s):
	Trace[1301817211]: [30.006532947s] [30.006532947s] END
	[INFO] Reloading
	2024-01-16T23:46:15.289Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2024-01-16T23:46:15.321Z [INFO] 127.0.0.1:57441 - 44193 "HINFO IN 1365412375578555759.7322076794870044211. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008071628s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-16T23:55:55.993Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-16T23:55:55.993Z [INFO] CoreDNS-1.6.2
	2024-01-16T23:55:55.993Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T23:55:56.003Z [INFO] 127.0.0.1:59166 - 17216 "HINFO IN 9081841845838306910.8543492278547947642. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009686681s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-771669
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-771669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=old-k8s-version-771669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_45_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:45:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:04:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:04:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:04:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:04:22 +0000   Tue, 16 Jan 2024 23:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.114
	  Hostname:    old-k8s-version-771669
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0599c334d1574c44852cd606008f4484
	 System UUID:                0599c334-d157-4c44-852c-d606008f4484
	 Boot ID:                    6a822f71-f4d9-4098-87a2-3d00d7bd6120
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                coredns-5644d7b6d9-9njqp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                etcd-old-k8s-version-771669                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-apiserver-old-k8s-version-771669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-controller-manager-old-k8s-version-771669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                kube-proxy-9ghls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-scheduler-old-k8s-version-771669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                metrics-server-74d5856cc6-gj4zn                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         8m58s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                    kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	  Normal  Starting                 9m22s                  kubelet, old-k8s-version-771669     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x7 over 9m22s)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x8 over 9m22s)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet, old-k8s-version-771669     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m12s                  kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074468] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.864255] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569582] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135010] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.485542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.831981] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.125426] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.166674] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.156891] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.236650] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +18.743957] systemd-fstab-generator[1024]: Ignoring "noauto" for root device
	[  +0.411438] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan16 23:56] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174] <==
	2024-01-16 23:55:46.458827 I | etcdserver: heartbeat = 100ms
	2024-01-16 23:55:46.458841 I | etcdserver: election = 1000ms
	2024-01-16 23:55:46.458854 I | etcdserver: snapshot count = 10000
	2024-01-16 23:55:46.458873 I | etcdserver: advertise client URLs = https://192.168.72.114:2379
	2024-01-16 23:55:46.463616 I | etcdserver: restarting member d80e54998a205cf3 in cluster fe5d4cbbe2066f7 at commit index 527
	2024-01-16 23:55:46.463912 I | raft: d80e54998a205cf3 became follower at term 2
	2024-01-16 23:55:46.463954 I | raft: newRaft d80e54998a205cf3 [peers: [], term: 2, commit: 527, applied: 0, lastindex: 527, lastterm: 2]
	2024-01-16 23:55:46.471794 W | auth: simple token is not cryptographically signed
	2024-01-16 23:55:46.474478 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 23:55:46.476050 I | etcdserver/membership: added member d80e54998a205cf3 [https://192.168.72.114:2380] to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:46.476228 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 23:55:46.476294 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 23:55:46.476369 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 23:55:46.476491 I | embed: listening for metrics on http://192.168.72.114:2381
	2024-01-16 23:55:46.477296 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 23:55:48.264496 I | raft: d80e54998a205cf3 is starting a new election at term 2
	2024-01-16 23:55:48.264548 I | raft: d80e54998a205cf3 became candidate at term 3
	2024-01-16 23:55:48.264567 I | raft: d80e54998a205cf3 received MsgVoteResp from d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.264578 I | raft: d80e54998a205cf3 became leader at term 3
	2024-01-16 23:55:48.264584 I | raft: raft.node: d80e54998a205cf3 elected leader d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.266381 I | etcdserver: published {Name:old-k8s-version-771669 ClientURLs:[https://192.168.72.114:2379]} to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:48.266872 I | embed: ready to serve client requests
	2024-01-16 23:55:48.267138 I | embed: ready to serve client requests
	2024-01-16 23:55:48.268857 I | embed: serving client requests on 192.168.72.114:2379
	2024-01-16 23:55:48.272176 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 00:05:06 up 9 min,  0 users,  load average: 0.06, 0.17, 0.12
	Linux old-k8s-version-771669 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877] <==
	I0116 23:56:53.263539       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 23:56:53.263719       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 23:56:53.263811       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 23:56:53.263833       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 23:58:53.264279       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 23:58:53.264388       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 23:58:53.264456       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 23:58:53.264463       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:00:52.563887       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:00:52.564240       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:00:52.564352       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:00:52.564376       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:01:52.565786       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:01:52.565953       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:01:52.566228       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:01:52.566238       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:03:52.566585       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:03:52.566696       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:03:52.566793       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:03:52.566804       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f] <==
	E0116 23:58:39.621526       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 23:58:50.456512       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 23:59:09.873763       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 23:59:22.458610       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 23:59:40.126248       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 23:59:54.461316       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:00:10.378630       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:00:26.463419       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:00:40.630807       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:00:58.465915       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:01:10.883148       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:01:30.468296       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:01:41.135146       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:02:02.470560       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:02:11.387148       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:02:34.472608       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:02:41.639600       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:03:06.474556       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:03:11.891581       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:03:38.476603       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:03:42.143541       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:04:10.478531       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:04:12.395270       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:04:42.480426       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:04:42.647190       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7] <==
	W0116 23:45:41.007361       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:45:41.016329       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:45:41.016352       1 server_others.go:149] Using iptables Proxier.
	I0116 23:45:41.016667       1 server.go:529] Version: v1.16.0
	I0116 23:45:41.018410       1 config.go:131] Starting endpoints config controller
	I0116 23:45:41.024018       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:45:41.018730       1 config.go:313] Starting service config controller
	I0116 23:45:41.024397       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:45:41.124802       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:45:41.125007       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0116 23:55:53.969591       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:55:53.981521       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:55:53.981589       1 server_others.go:149] Using iptables Proxier.
	I0116 23:55:53.982391       1 server.go:529] Version: v1.16.0
	I0116 23:55:53.983881       1 config.go:313] Starting service config controller
	I0116 23:55:53.983929       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:55:53.984039       1 config.go:131] Starting endpoints config controller
	I0116 23:55:53.984056       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:55:54.084183       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:55:54.084427       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d] <==
	E0116 23:45:19.290133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:45:19.293479       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 23:45:19.294843       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 23:45:19.296276       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 23:45:19.297284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 23:45:19.302219       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 23:45:19.306970       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:45:19.307150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.307930       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.308102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0116 23:55:45.888159       1 serving.go:319] Generated self-signed cert in-memory
	W0116 23:55:51.429069       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 23:55:51.429295       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:55:51.429326       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 23:55:51.429407       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 23:55:51.479301       1 server.go:143] Version: v1.16.0
	I0116 23:55:51.479424       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0116 23:55:51.496560       1 authorization.go:47] Authorization is disabled
	W0116 23:55:51.496594       1 authentication.go:79] Authentication is disabled
	I0116 23:55:51.496610       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 23:55:51.497402       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 23:55:51.544869       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:55:51.545090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545174       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545242       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:05:06 UTC. --
	Jan 17 00:00:20 old-k8s-version-771669 kubelet[1030]: E0117 00:00:20.444404    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:00:33 old-k8s-version-771669 kubelet[1030]: E0117 00:00:33.445056    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:00:43 old-k8s-version-771669 kubelet[1030]: E0117 00:00:43.516268    1030 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 17 00:00:44 old-k8s-version-771669 kubelet[1030]: E0117 00:00:44.444044    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:00:59 old-k8s-version-771669 kubelet[1030]: E0117 00:00:59.449618    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:01:13 old-k8s-version-771669 kubelet[1030]: E0117 00:01:13.444814    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:01:27 old-k8s-version-771669 kubelet[1030]: E0117 00:01:27.444663    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:01:41 old-k8s-version-771669 kubelet[1030]: E0117 00:01:41.460397    1030 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:01:41 old-k8s-version-771669 kubelet[1030]: E0117 00:01:41.460472    1030 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:01:41 old-k8s-version-771669 kubelet[1030]: E0117 00:01:41.460523    1030 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:01:41 old-k8s-version-771669 kubelet[1030]: E0117 00:01:41.460550    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 17 00:01:55 old-k8s-version-771669 kubelet[1030]: E0117 00:01:55.444772    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:02:08 old-k8s-version-771669 kubelet[1030]: E0117 00:02:08.445183    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:02:23 old-k8s-version-771669 kubelet[1030]: E0117 00:02:23.444859    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:02:36 old-k8s-version-771669 kubelet[1030]: E0117 00:02:36.444550    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:02:51 old-k8s-version-771669 kubelet[1030]: E0117 00:02:51.444719    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:03:04 old-k8s-version-771669 kubelet[1030]: E0117 00:03:04.444771    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:03:19 old-k8s-version-771669 kubelet[1030]: E0117 00:03:19.445378    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:03:34 old-k8s-version-771669 kubelet[1030]: E0117 00:03:34.444396    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:03:48 old-k8s-version-771669 kubelet[1030]: E0117 00:03:48.444531    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:04:00 old-k8s-version-771669 kubelet[1030]: E0117 00:04:00.445200    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:04:15 old-k8s-version-771669 kubelet[1030]: E0117 00:04:15.444330    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:04:29 old-k8s-version-771669 kubelet[1030]: E0117 00:04:29.444595    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:04:44 old-k8s-version-771669 kubelet[1030]: E0117 00:04:44.444378    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:04:55 old-k8s-version-771669 kubelet[1030]: E0117 00:04:55.444512    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3] <==
	I0116 23:45:41.784762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:45:41.799195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:45:41.799369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:45:41.808193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:45:41.809025       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:45:41.810922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4 became leader
	I0116 23:45:41.909835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:55:55.015814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:55:55.084172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:55:55.084535       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:56:12.492253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:56:12.492881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	I0116 23:56:12.493615       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0 became leader
	I0116 23:56:12.593934       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-771669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-gj4zn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1 (66.187025ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-gj4zn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.21s)
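Note on the failure above: UserAppExistsAfterStop waits for a pod labeled k8s-app=kubernetes-dashboard after the stop/start cycle, and the post-mortem reports only metrics-server-74d5856cc6-gj4zn as non-running; that pod's ImagePullBackOff is expected, since the Audit trail below shows metrics-server was enabled with --registries=MetricsServer=fake.domain. A minimal manual re-check, assuming the old-k8s-version-771669 profile is still up (illustrative commands, not part of the recorded run):

	kubectl --context old-k8s-version-771669 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-771669 get pods -A --field-selector=status.phase!=Running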

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 23:59:41.171712   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:59:55.290011   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0117 00:00:10.014821   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0117 00:00:47.136381   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0117 00:01:00.968446   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0117 00:02:23.603789   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0117 00:02:24.015514   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0117 00:02:32.960391   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0117 00:03:19.621674   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0117 00:03:31.442330   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0117 00:03:38.240610   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0117 00:03:46.648922   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0117 00:03:56.005822   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-085322 -n no-preload-085322
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:08:21.401419562 +0000 UTC m=+5524.616923367
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
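The no-preload failure follows the same pattern: the Audit trail below records addons enable dashboard -p no-preload-085322 starting at 23:49 UTC with no recorded end time, and no k8s-app=kubernetes-dashboard pod appeared within the 9m0s window. A possible manual follow-up, assuming the no-preload-085322 profile is still reachable (illustrative commands, not part of the recorded run):

	out/minikube-linux-amd64 -p no-preload-085322 addons list
	kubectl --context no-preload-085322 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard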
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-085322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-085322 logs -n 25: (1.612710438s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo cat                              | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
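
The format line above describes klog-style entries: a severity letter (I/W/E/F), the date, a microsecond timestamp, the process ID, the source file and line, then the message. Purely as an illustration (not part of the report or of minikube itself), a small Go snippet that splits such an entry with a regular expression might look like this:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches entries of the form documented above, e.g.
	//   I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
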
	I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:50:38.759896   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.759907   60269 out.go:309] Setting ErrFile to fd 2...
	I0116 23:50:38.759914   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.760126   60269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:50:38.760678   60269 out.go:303] Setting JSON to false
	I0116 23:50:38.761641   60269 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5585,"bootTime":1705443454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:50:38.761709   60269 start.go:138] virtualization: kvm guest
	I0116 23:50:38.763997   60269 out.go:177] * [default-k8s-diff-port-967325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:50:38.765368   60269 notify.go:220] Checking for updates...
	I0116 23:50:38.767255   60269 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:50:38.768689   60269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:50:38.770002   60269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:50:38.771265   60269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:50:38.772478   60269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:50:38.773887   60269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:50:38.775771   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:50:38.776343   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.776406   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.790484   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0116 23:50:38.790881   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.791331   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.791354   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.791767   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.791948   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.792207   60269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:50:38.792478   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.792512   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.806373   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0116 23:50:38.806769   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.807352   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.807377   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.807713   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.807888   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.844486   60269 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:50:38.845772   60269 start.go:298] selected driver: kvm2
	I0116 23:50:38.845786   60269 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.845896   60269 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:50:38.846669   60269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.846746   60269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:50:38.861437   60269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:50:38.861794   60269 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:50:38.861869   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:50:38.861886   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:50:38.861903   60269 start_flags.go:321] config:
	{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-96732
5 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.862070   60269 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.864512   60269 out.go:177] * Starting control plane node default-k8s-diff-port-967325 in cluster default-k8s-diff-port-967325
	I0116 23:50:35.694534   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.766489   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.865813   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:50:38.865854   60269 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:50:38.865868   60269 cache.go:56] Caching tarball of preloaded images
	I0116 23:50:38.865946   60269 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:50:38.865958   60269 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:50:38.866067   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:50:38.866254   60269 start.go:365] acquiring machines lock for default-k8s-diff-port-967325: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:50:44.846593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:47.918614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:53.998619   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:57.070626   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:03.150612   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:06.222615   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:12.302594   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:15.374637   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:21.454609   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:24.526620   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:30.606636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:33.678599   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:39.758623   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:42.830638   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:48.910588   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:51.982570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:58.062585   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:01.134627   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:07.214606   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:10.286692   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:16.366642   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:19.438617   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:25.518614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:28.590572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:34.670577   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:37.742593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:43.822547   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:46.894566   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:52.974586   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:56.046663   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:02.126625   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:05.198647   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:11.278567   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:14.350629   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:20.430640   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:23.502572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:29.582639   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:32.654601   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:38.734636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:41.806621   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:47.886613   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:50.958654   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:57.038576   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:00.110570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
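
The long run of "no route to host" entries above is libmachine (process 59622, the old-k8s-version-771669 profile) repeatedly dialing the guest's SSH port while the VM is unreachable. As a rough sketch of that pattern only (a hypothetical helper, not minikube's actual code), a bounded TCP dial-and-retry loop in Go could look like this:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH keeps dialing addr (e.g. "192.168.72.114:22") until it answers
	// or the overall deadline expires. The timings here are illustrative only.
	func waitForSSH(addr string, perDial, overall time.Duration) error {
		deadline := time.Now().Add(overall)
		for {
			conn, err := net.DialTimeout("tcp", addr, perDial)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up on %s: last error: %w", addr, err)
			}
			time.Sleep(3 * time.Second) // roughly the spacing seen in the log above
		}
	}

	func main() {
		if err := waitForSSH("192.168.72.114:22", 10*time.Second, time.Minute); err != nil {
			fmt.Println(err)
		}
	}
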
	I0116 23:54:03.114737   59938 start.go:369] acquired machines lock for "no-preload-085322" in 4m4.444202574s
	I0116 23:54:03.114809   59938 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:03.114817   59938 fix.go:54] fixHost starting: 
	I0116 23:54:03.115151   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:03.115188   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:03.129740   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0116 23:54:03.130141   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:03.130598   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:54:03.130619   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:03.130926   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:03.131095   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:03.131232   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:54:03.132851   59938 fix.go:102] recreateIfNeeded on no-preload-085322: state=Stopped err=<nil>
	I0116 23:54:03.132873   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	W0116 23:54:03.133043   59938 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:03.134884   59938 out.go:177] * Restarting existing kvm2 VM for "no-preload-085322" ...
	I0116 23:54:03.136262   59938 main.go:141] libmachine: (no-preload-085322) Calling .Start
	I0116 23:54:03.136432   59938 main.go:141] libmachine: (no-preload-085322) Ensuring networks are active...
	I0116 23:54:03.137113   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network default is active
	I0116 23:54:03.137528   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network mk-no-preload-085322 is active
	I0116 23:54:03.137880   59938 main.go:141] libmachine: (no-preload-085322) Getting domain xml...
	I0116 23:54:03.138613   59938 main.go:141] libmachine: (no-preload-085322) Creating domain...
	I0116 23:54:03.112375   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:03.112409   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:54:03.114601   59622 machine.go:91] provisioned docker machine in 4m37.41859178s
	I0116 23:54:03.114647   59622 fix.go:56] fixHost completed within 4m37.439054279s
	I0116 23:54:03.114654   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 4m37.439073197s
	W0116 23:54:03.114678   59622 start.go:694] error starting host: provision: host is not running
	W0116 23:54:03.114769   59622 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:54:03.114780   59622 start.go:709] Will try again in 5 seconds ...
	I0116 23:54:04.327758   59938 main.go:141] libmachine: (no-preload-085322) Waiting to get IP...
	I0116 23:54:04.328580   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.329077   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.329172   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.329065   60794 retry.go:31] will retry after 242.417074ms: waiting for machine to come up
	I0116 23:54:04.573623   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.574286   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.574314   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.574234   60794 retry.go:31] will retry after 376.338621ms: waiting for machine to come up
	I0116 23:54:04.952081   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.952569   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.952609   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.952512   60794 retry.go:31] will retry after 437.645823ms: waiting for machine to come up
	I0116 23:54:05.392169   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.392672   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.392701   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.392621   60794 retry.go:31] will retry after 422.797207ms: waiting for machine to come up
	I0116 23:54:05.817196   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.817610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.817639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.817571   60794 retry.go:31] will retry after 640.372887ms: waiting for machine to come up
	I0116 23:54:06.459387   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:06.459792   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:06.459822   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:06.459719   60794 retry.go:31] will retry after 683.537292ms: waiting for machine to come up
	I0116 23:54:07.144668   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:07.144994   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:07.145027   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:07.144980   60794 retry.go:31] will retry after 898.931175ms: waiting for machine to come up
	I0116 23:54:08.045022   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:08.045409   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:08.045437   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:08.045355   60794 retry.go:31] will retry after 1.288697598s: waiting for machine to come up
	I0116 23:54:08.117270   59622 start.go:365] acquiring machines lock for old-k8s-version-771669: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
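
Lines like the one above, together with the matching "acquired machines lock ... in 4m4s" and "releasing machines lock" entries, show the profiles serializing their VM operations behind a named lock with a 500ms poll delay and a 13m timeout. A minimal sketch of that acquire-with-timeout pattern (hypothetical, using an in-memory stand-in rather than minikube's real lock):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// namedLock is a toy stand-in for the "machines lock" seen in the log:
	// callers poll every delay until they win the lock or the timeout expires.
	type namedLock struct{ ch chan struct{} }

	func newNamedLock() *namedLock {
		l := &namedLock{ch: make(chan struct{}, 1)}
		l.ch <- struct{}{} // lock starts free
		return l
	}

	func (l *namedLock) Acquire(delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			select {
			case <-l.ch:
				return nil // acquired
			default:
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func (l *namedLock) Release() { l.ch <- struct{}{} }

	func main() {
		lock := newNamedLock()
		start := time.Now()
		if err := lock.Acquire(500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		defer lock.Release()
		fmt.Printf("acquired machines lock in %s\n", time.Since(start))
	}
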
	I0116 23:54:09.335202   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:09.335610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:09.335639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:09.335546   60794 retry.go:31] will retry after 1.355850443s: waiting for machine to come up
	I0116 23:54:10.693078   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:10.693554   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:10.693606   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:10.693520   60794 retry.go:31] will retry after 1.916329826s: waiting for machine to come up
	I0116 23:54:12.611840   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:12.612332   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:12.612367   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:12.612282   60794 retry.go:31] will retry after 2.556862035s: waiting for machine to come up
	I0116 23:54:15.171589   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:15.172039   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:15.172068   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:15.171972   60794 retry.go:31] will retry after 2.519530929s: waiting for machine to come up
	I0116 23:54:17.694557   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:17.694939   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:17.694968   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:17.694886   60794 retry.go:31] will retry after 3.090458186s: waiting for machine to come up
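
The retry.go entries above show the driver polling the restarted no-preload VM for a DHCP lease, with the wait between attempts growing (plus some jitter) each time: 242ms, 376ms, 437ms, and so on. A rough Go sketch of that grow-with-jitter loop follows; lookupIP here is a purely hypothetical placeholder for the libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the hypervisor's DHCP leases; it simply
	// fails a few times before "finding" an address, for illustration only.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.50.183", nil
	}

	func main() {
		wait := 250 * time.Millisecond
		for attempt := 0; attempt < 14; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Grow the delay and add jitter, similar in spirit to the intervals
			// printed by retry.go in the log above.
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait+jitter)
			time.Sleep(wait + jitter)
			wait = wait * 3 / 2
		}
		fmt.Println("machine never came up")
	}
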
	I0116 23:54:21.986927   60073 start.go:369] acquired machines lock for "embed-certs-837871" in 4m12.827160117s
	I0116 23:54:21.986990   60073 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:21.986998   60073 fix.go:54] fixHost starting: 
	I0116 23:54:21.987380   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:21.987421   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:22.004600   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0116 23:54:22.004995   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:22.005467   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:54:22.005496   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:22.005829   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:22.006029   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:22.006185   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:54:22.008077   60073 fix.go:102] recreateIfNeeded on embed-certs-837871: state=Stopped err=<nil>
	I0116 23:54:22.008103   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	W0116 23:54:22.008290   60073 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:22.010638   60073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-837871" ...
	I0116 23:54:20.788433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788853   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has current primary IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788879   59938 main.go:141] libmachine: (no-preload-085322) Found IP for machine: 192.168.50.183
	I0116 23:54:20.788893   59938 main.go:141] libmachine: (no-preload-085322) Reserving static IP address...
	I0116 23:54:20.789229   59938 main.go:141] libmachine: (no-preload-085322) Reserved static IP address: 192.168.50.183
	I0116 23:54:20.789275   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.789290   59938 main.go:141] libmachine: (no-preload-085322) Waiting for SSH to be available...
	I0116 23:54:20.789318   59938 main.go:141] libmachine: (no-preload-085322) DBG | skip adding static IP to network mk-no-preload-085322 - found existing host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"}
	I0116 23:54:20.789337   59938 main.go:141] libmachine: (no-preload-085322) DBG | Getting to WaitForSSH function...
	I0116 23:54:20.791667   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792013   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.792054   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792155   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH client type: external
	I0116 23:54:20.792182   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa (-rw-------)
	I0116 23:54:20.792239   59938 main.go:141] libmachine: (no-preload-085322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:20.792264   59938 main.go:141] libmachine: (no-preload-085322) DBG | About to run SSH command:
	I0116 23:54:20.792282   59938 main.go:141] libmachine: (no-preload-085322) DBG | exit 0
	I0116 23:54:20.878320   59938 main.go:141] libmachine: (no-preload-085322) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:20.878650   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetConfigRaw
	I0116 23:54:20.879331   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:20.881964   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882374   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.882410   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882680   59938 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:54:20.882904   59938 machine.go:88] provisioning docker machine ...
	I0116 23:54:20.882923   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:20.883142   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883335   59938 buildroot.go:166] provisioning hostname "no-preload-085322"
	I0116 23:54:20.883356   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883553   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:20.885549   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.885943   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.885978   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.886040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:20.886216   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886593   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:20.886774   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:20.887119   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:20.887134   59938 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname
	I0116 23:54:21.013385   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-085322
	
	I0116 23:54:21.013408   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.016312   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016630   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.016670   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016859   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.017058   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017252   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017386   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.017557   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.017929   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.017956   59938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-085322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-085322/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-085322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:21.135238   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
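
Steps such as the hostname and /etc/hosts commands above are executed on the guest over SSH by the provisioner. As an illustration only (this is not minikube's code), the same round trip could be done with golang.org/x/crypto/ssh, reusing the key path, user, and address that appear in the log:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key and address taken from the log above; adjust for your environment.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", "192.168.50.183:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput(`sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname`)
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}
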
	I0116 23:54:21.135270   59938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:21.135289   59938 buildroot.go:174] setting up certificates
	I0116 23:54:21.135313   59938 provision.go:83] configureAuth start
	I0116 23:54:21.135326   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:21.135618   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.138168   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138443   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.138470   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138654   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.140789   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141120   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.141147   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141324   59938 provision.go:138] copyHostCerts
	I0116 23:54:21.141367   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:21.141377   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:21.141447   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:21.141550   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:21.141561   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:21.141599   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:21.141671   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:21.141682   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:21.141714   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:21.141791   59938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.no-preload-085322 san=[192.168.50.183 192.168.50.183 localhost 127.0.0.1 minikube no-preload-085322]
	I0116 23:54:21.265735   59938 provision.go:172] copyRemoteCerts
	I0116 23:54:21.265800   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:21.265825   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.268291   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268647   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.268676   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268842   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.269076   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.269250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.269383   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.351116   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:21.373208   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 23:54:21.395440   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:54:21.418028   59938 provision.go:86] duration metric: configureAuth took 282.698913ms
	I0116 23:54:21.418069   59938 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:21.418298   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:54:21.418409   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.421433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421751   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.421792   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421959   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.422191   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422491   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.422646   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.422977   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.422995   59938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:21.743469   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:21.743502   59938 machine.go:91] provisioned docker machine in 860.58306ms
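
A note on the "%!s(MISSING)" fragments in the crio.minikube command above (and in the "date" command further down): they are almost certainly not part of the scripts that ran on the guest, since both commands clearly succeeded. They are Go's fmt package flagging a %s verb with no matching argument when the command string was rendered into the log. A two-line Go example reproduces the marker:

	package main

	import "fmt"

	func main() {
		// fmt prints %!s(MISSING) when a %s verb has no corresponding argument,
		// which is the marker that appears in the logged SSH commands.
		fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"\n")
	}
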
	I0116 23:54:21.743515   59938 start.go:300] post-start starting for "no-preload-085322" (driver="kvm2")
	I0116 23:54:21.743538   59938 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:21.743558   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.743870   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:21.743898   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.746430   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746786   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.746823   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746957   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.747146   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.747302   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.747394   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.837160   59938 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:21.841116   59938 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:21.841157   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:21.841249   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:21.841329   59938 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:21.841413   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:21.849407   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:21.872039   59938 start.go:303] post-start completed in 128.504699ms
	I0116 23:54:21.872072   59938 fix.go:56] fixHost completed within 18.75725342s
	I0116 23:54:21.872110   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.874707   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875214   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.875240   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875487   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.875722   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.875867   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.876032   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.876210   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.876556   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.876570   59938 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:21.986781   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449261.939803143
	
	I0116 23:54:21.986801   59938 fix.go:206] guest clock: 1705449261.939803143
	I0116 23:54:21.986809   59938 fix.go:219] Guest: 2024-01-16 23:54:21.939803143 +0000 UTC Remote: 2024-01-16 23:54:21.872075872 +0000 UTC m=+263.353199909 (delta=67.727271ms)
	I0116 23:54:21.986830   59938 fix.go:190] guest clock delta is within tolerance: 67.727271ms
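
The fix.go lines above compare the guest's clock (read over SSH with a date command returning seconds.nanoseconds) against the host's, and accept the host when the delta is within tolerance. A hedged sketch of that comparison, with hypothetical names and an illustrative tolerance, using the values printed in the log:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the "seconds.nanoseconds" string returned by the
	// guest's date command and reports how far it is from the given local time.
	func guestClockDelta(out string, local time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		guest := time.Unix(sec, nsec)
		return guest.Sub(local), nil
	}

	func main() {
		// Values taken from the log entries above.
		delta, err := guestClockDelta("1705449261.939803143", time.Unix(1705449261, 872075872))
		if err != nil {
			fmt.Println(err)
			return
		}
		const tolerance = time.Second // illustrative threshold, not minikube's exact value
		fmt.Printf("guest clock delta %v, within tolerance: %v\n",
			delta, delta < tolerance && delta > -tolerance)
	}
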
	I0116 23:54:21.986836   59938 start.go:83] releasing machines lock for "no-preload-085322", held for 18.872049435s
	I0116 23:54:21.986866   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.987132   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.990038   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990450   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.990479   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990658   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991145   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991340   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991433   59938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:21.991476   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.991598   59938 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:21.991622   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.994160   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994384   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994588   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994611   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994696   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.994864   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.994879   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994956   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.995040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.995116   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995212   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.995279   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.995338   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995469   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:22.075709   59938 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:22.113571   59938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:22.255250   59938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:22.261120   59938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:22.261199   59938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:22.275644   59938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:22.275667   59938 start.go:475] detecting cgroup driver to use...
	I0116 23:54:22.275740   59938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:22.292314   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:22.303940   59938 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:22.303994   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:22.316146   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:22.328261   59938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:22.429568   59938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:22.545391   59938 docker.go:233] disabling docker service ...
	I0116 23:54:22.545478   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:22.558823   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:22.571068   59938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:22.680713   59938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:22.784418   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:22.800751   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:22.819671   59938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:22.819738   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.831950   59938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:22.832019   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.842937   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.853168   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.863057   59938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:22.873184   59938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:22.881975   59938 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:22.882051   59938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:22.895888   59938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:22.904754   59938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:23.007196   59938 ssh_runner.go:195] Run: sudo systemctl restart crio
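	The crio.go steps above first probe net.bridge.bridge-nf-call-iptables; the exit status 255 simply means br_netfilter is not loaded yet, so the code falls back to modprobe, enables IPv4 forwarding, and restarts cri-o. A rough local equivalent of that probe-and-fallback, using plain os/exec rather than minikube's ssh_runner (an assumption of this sketch):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBrNetfilter mirrors the sequence in the log: check the bridge-nf
	// sysctl, and if the key is missing, load br_netfilter so it appears,
	// then turn on IPv4 forwarding.
	func ensureBrNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // key already present, nothing to do
		}
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil {
			if runErr := err.Run(); runErr != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", runErr)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Println("netfilter prep failed:", err)
		}
	}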
	I0116 23:54:23.167523   59938 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:23.167604   59938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:23.172603   59938 start.go:543] Will wait 60s for crictl version
	I0116 23:54:23.172661   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.176234   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:23.211267   59938 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
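	After restarting cri-o, start.go gives the runtime up to 60s for its socket to appear and for crictl to answer before proceeding. A small polling sketch of that wait; the 500ms poll interval is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the deadline passes,
	// mirroring the "Will wait 60s for socket path" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("cri-o socket is ready")
	}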
	I0116 23:54:23.211355   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.255175   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.300404   59938 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 23:54:23.302242   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:23.305445   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.305835   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:23.305860   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.306058   59938 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:23.310150   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:23.321291   59938 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 23:54:23.321348   59938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:23.358829   59938 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 23:54:23.358866   59938 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:54:23.358910   59938 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:23.358974   59938 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.359014   59938 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.359037   59938 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.359019   59938 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 23:54:23.359109   59938 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.359116   59938 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.359192   59938 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360471   59938 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.360486   59938 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.360479   59938 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 23:54:23.360482   59938 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.360503   59938 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:22.012196   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Start
	I0116 23:54:22.012405   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring networks are active...
	I0116 23:54:22.013178   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network default is active
	I0116 23:54:22.013529   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network mk-embed-certs-837871 is active
	I0116 23:54:22.013912   60073 main.go:141] libmachine: (embed-certs-837871) Getting domain xml...
	I0116 23:54:22.014514   60073 main.go:141] libmachine: (embed-certs-837871) Creating domain...
	I0116 23:54:23.261878   60073 main.go:141] libmachine: (embed-certs-837871) Waiting to get IP...
	I0116 23:54:23.263010   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.263550   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.263625   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.263530   60915 retry.go:31] will retry after 307.379701ms: waiting for machine to come up
	I0116 23:54:23.572127   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.572604   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.572640   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.572557   60915 retry.go:31] will retry after 367.767271ms: waiting for machine to come up
	I0116 23:54:23.942420   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.942907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.942937   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.942855   60915 retry.go:31] will retry after 327.227989ms: waiting for machine to come up
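	While no-preload is preparing its runtime, the embed-certs VM (the 60073 process) is still waiting for a DHCP lease; retry.go re-checks with a growing delay until the domain reports an IP. A compact sketch of that kind of retry loop; the jitter factor, growth rate, and the lookup callback are illustrative and not libmachine's API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a growing,
	// jittered interval between attempts, like the retry.go lines in the log.
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		delay := 300 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2 // grow the base delay each round
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 5 { // pretend the DHCP lease takes a few polls to appear
				return "", errors.New("no lease yet")
			}
			return "192.168.39.226", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}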
	I0116 23:54:23.582933   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.587427   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.591221   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 23:54:23.600943   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.601854   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.620857   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.636430   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.654149   59938 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 23:54:23.654203   59938 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.654256   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.704462   59938 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 23:54:23.704519   59938 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.704571   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851614   59938 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 23:54:23.851646   59938 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 23:54:23.851663   59938 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.851662   59938 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851711   59938 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 23:54:23.851754   59938 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.851767   59938 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 23:54:23.851795   59938 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.851802   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851832   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.851843   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851845   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.868480   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.906566   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.906609   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.906713   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.927452   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.927455   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.927669   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.927767   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.959664   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 23:54:23.959782   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:23.990016   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 23:54:23.990042   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990040   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:23.990089   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990217   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:24.018967   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019064   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 23:54:24.019080   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019089   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019115   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 23:54:24.019135   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019160   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:24.164580   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.888709   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898467269s)
	I0116 23:54:26.888747   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 23:54:26.888768   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888777   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.869591717s)
	I0116 23:54:26.888817   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888824   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 23:54:26.888710   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.869617277s)
	I0116 23:54:26.888879   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 23:54:26.888856   59938 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724243534s)
	I0116 23:54:26.888931   59938 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 23:54:26.888965   59938 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.889006   59938 ssh_runner.go:195] Run: which crictl
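	The cache_images lines follow the same pattern for every image: podman image inspect to see whether the expected image ID is already in the runtime, crictl rmi if only a stale or missing copy exists, then podman load -i on the tarball copied under /var/lib/minikube/images. A local approximation of that check-then-load step; the helper name and the call to plain os/exec (rather than minikube's ssh_runner) are placeholders, not cache_images.go itself:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureImage loads tarball into the runtime unless image already resolves
	// to wantID, following the "needs transfer" logic in the log above.
	func ensureImage(image, wantID, tarball string) error {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) == wantID {
			return nil // already present with the expected ID
		}
		// Remove any mismatched copy, then load the cached tarball.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tarball, err)
		}
		return nil
	}

	func main() {
		err := ensureImage(
			"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
			"cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834",
			"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2",
		)
		fmt.Println(err)
	}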
	I0116 23:54:24.271311   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.271747   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.271777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.271695   60915 retry.go:31] will retry after 459.459832ms: waiting for machine to come up
	I0116 23:54:24.732506   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.733007   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.733036   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.732957   60915 retry.go:31] will retry after 584.775753ms: waiting for machine to come up
	I0116 23:54:25.319663   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:25.320171   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:25.320215   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:25.320117   60915 retry.go:31] will retry after 942.568443ms: waiting for machine to come up
	I0116 23:54:26.264735   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:26.265207   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:26.265241   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:26.265152   60915 retry.go:31] will retry after 986.504626ms: waiting for machine to come up
	I0116 23:54:27.253751   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:27.254422   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:27.254451   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:27.254363   60915 retry.go:31] will retry after 1.332096797s: waiting for machine to come up
	I0116 23:54:28.588407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:28.589024   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:28.589057   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:28.588967   60915 retry.go:31] will retry after 1.510766858s: waiting for machine to come up
	I0116 23:54:29.054814   59938 ssh_runner.go:235] Completed: which crictl: (2.165780571s)
	I0116 23:54:29.054899   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:29.054938   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.166081855s)
	I0116 23:54:29.054973   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 23:54:29.055002   59938 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:29.055058   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:32.781289   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.726190592s)
	I0116 23:54:32.781378   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 23:54:32.781384   59938 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.72645917s)
	I0116 23:54:32.781421   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781452   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 23:54:32.781499   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781549   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:32.786061   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 23:54:30.101582   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:30.102035   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:30.102080   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:30.101996   60915 retry.go:31] will retry after 1.681256612s: waiting for machine to come up
	I0116 23:54:31.786133   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:31.786678   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:31.786717   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:31.786625   60915 retry.go:31] will retry after 2.501397759s: waiting for machine to come up
	I0116 23:54:35.155364   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.37383462s)
	I0116 23:54:35.155398   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 23:54:35.155423   59938 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:35.155471   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:37.035841   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880336789s)
	I0116 23:54:37.035878   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 23:54:37.035908   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:37.035957   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:38.382731   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.346744157s)
	I0116 23:54:38.382770   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 23:54:38.382801   59938 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:38.382857   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:34.289289   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:34.289853   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:34.289876   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:34.289788   60915 retry.go:31] will retry after 2.655614857s: waiting for machine to come up
	I0116 23:54:36.947614   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:36.948090   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:36.948110   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:36.948022   60915 retry.go:31] will retry after 3.331974558s: waiting for machine to come up
	I0116 23:54:41.527170   60269 start.go:369] acquired machines lock for "default-k8s-diff-port-967325" in 4m2.660883224s
	I0116 23:54:41.527252   60269 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:41.527265   60269 fix.go:54] fixHost starting: 
	I0116 23:54:41.527698   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:41.527739   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:41.544050   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0116 23:54:41.544467   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:41.544979   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:54:41.545009   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:41.545297   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:41.545474   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:54:41.545619   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:54:41.547250   60269 fix.go:102] recreateIfNeeded on default-k8s-diff-port-967325: state=Stopped err=<nil>
	I0116 23:54:41.547276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	W0116 23:54:41.547440   60269 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:41.550415   60269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-967325" ...
	I0116 23:54:40.284163   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.284689   60073 main.go:141] libmachine: (embed-certs-837871) Found IP for machine: 192.168.39.226
	I0116 23:54:40.284718   60073 main.go:141] libmachine: (embed-certs-837871) Reserving static IP address...
	I0116 23:54:40.284734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has current primary IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.285176   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.285209   60073 main.go:141] libmachine: (embed-certs-837871) DBG | skip adding static IP to network mk-embed-certs-837871 - found existing host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"}
	I0116 23:54:40.285223   60073 main.go:141] libmachine: (embed-certs-837871) Reserved static IP address: 192.168.39.226
	I0116 23:54:40.285240   60073 main.go:141] libmachine: (embed-certs-837871) Waiting for SSH to be available...
	I0116 23:54:40.285254   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Getting to WaitForSSH function...
	I0116 23:54:40.287766   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288257   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.288283   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288417   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH client type: external
	I0116 23:54:40.288441   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa (-rw-------)
	I0116 23:54:40.288466   60073 main.go:141] libmachine: (embed-certs-837871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:40.288473   60073 main.go:141] libmachine: (embed-certs-837871) DBG | About to run SSH command:
	I0116 23:54:40.288481   60073 main.go:141] libmachine: (embed-certs-837871) DBG | exit 0
	I0116 23:54:40.374194   60073 main.go:141] libmachine: (embed-certs-837871) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:40.374646   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetConfigRaw
	I0116 23:54:40.375380   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.378323   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.378843   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.378877   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.379145   60073 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:54:40.379332   60073 machine.go:88] provisioning docker machine ...
	I0116 23:54:40.379351   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:40.379538   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379712   60073 buildroot.go:166] provisioning hostname "embed-certs-837871"
	I0116 23:54:40.379731   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379882   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.382022   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382386   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.382408   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382542   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.382695   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.382833   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.383019   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.383201   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.383686   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.383707   60073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-837871 && echo "embed-certs-837871" | sudo tee /etc/hostname
	I0116 23:54:40.506034   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-837871
	
	I0116 23:54:40.506064   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.508789   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509236   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.509266   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509427   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.509624   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509782   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509909   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.510109   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.510593   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.510620   60073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-837871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-837871/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-837871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:40.626272   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:40.626298   60073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:40.626356   60073 buildroot.go:174] setting up certificates
	I0116 23:54:40.626372   60073 provision.go:83] configureAuth start
	I0116 23:54:40.626383   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.626705   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.629226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629577   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.629605   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629737   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.631784   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632093   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.632114   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632249   60073 provision.go:138] copyHostCerts
	I0116 23:54:40.632306   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:40.632318   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:40.632389   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:40.632489   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:40.632499   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:40.632529   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:40.632607   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:40.632617   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:40.632645   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:40.632705   60073 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.embed-certs-837871 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube embed-certs-837871]
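	provision.go:112 issues a server certificate signed by the minikube CA, carrying the SAN list printed above (the node IP, localhost, 127.0.0.1, minikube, and the hostname). A condensed sketch of issuing such a certificate with crypto/x509; the file names, PKCS#1 key encoding, and 3-year lifetime are assumptions of this sketch, not minikube's provisioner:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Load the CA pair referenced in the log (ca.pem / ca-key.pem).
		caPEM, err := os.ReadFile("ca.pem")
		must(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		must(err)
		caBlock, _ := pem.Decode(caPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		must(err)

		// New server key plus a template carrying the SANs from the log line.
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-837871"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "embed-certs-837871"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.226"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		must(err)
		must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
		must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
	}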
	I0116 23:54:40.842680   60073 provision.go:172] copyRemoteCerts
	I0116 23:54:40.842749   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:40.842778   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.845198   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845585   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.845626   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845798   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.845987   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.846158   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.846313   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:40.931372   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:54:40.955528   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:40.979724   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 23:54:41.000711   60073 provision.go:86] duration metric: configureAuth took 374.325381ms
	I0116 23:54:41.000743   60073 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:41.000988   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:54:41.001078   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.003907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.004256   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004472   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.004703   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.004886   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.005025   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.005172   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.005489   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.005505   60073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:41.294820   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:41.294846   60073 machine.go:91] provisioned docker machine in 915.500911ms
	I0116 23:54:41.294860   60073 start.go:300] post-start starting for "embed-certs-837871" (driver="kvm2")
	I0116 23:54:41.294873   60073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:41.294894   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.295245   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:41.295275   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.298053   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298453   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.298482   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298630   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.298831   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.299028   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.299229   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.383434   60073 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:41.387526   60073 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:41.387550   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:41.387618   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:41.387716   60073 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:41.387832   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:41.395959   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:41.417602   60073 start.go:303] post-start completed in 122.726786ms
	I0116 23:54:41.417634   60073 fix.go:56] fixHost completed within 19.430636017s
	I0116 23:54:41.417657   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.420348   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420665   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.420692   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420853   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.421099   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421245   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421386   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.421532   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.421882   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.421898   60073 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:41.527026   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449281.479666719
	
	I0116 23:54:41.527054   60073 fix.go:206] guest clock: 1705449281.479666719
	I0116 23:54:41.527061   60073 fix.go:219] Guest: 2024-01-16 23:54:41.479666719 +0000 UTC Remote: 2024-01-16 23:54:41.417638777 +0000 UTC m=+272.403645668 (delta=62.027942ms)
	I0116 23:54:41.527080   60073 fix.go:190] guest clock delta is within tolerance: 62.027942ms
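
Note on the guest-clock comparison above: fix.go takes the absolute difference between the guest timestamp and the host timestamp and accepts the host if the delta falls under a skew threshold. A minimal Go sketch of that check; the 2-second tolerance is an assumed placeholder, not a value taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock differs from the host
    // clock by no more than maxSkew, returning the absolute delta as well.
    func withinTolerance(host, guest time.Time, maxSkew time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= maxSkew
    }

    func main() {
        host := time.Now()
        guest := host.Add(62 * time.Millisecond) // roughly the delta seen above
        d, ok := withinTolerance(host, guest, 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
    }
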
	I0116 23:54:41.527085   60073 start.go:83] releasing machines lock for "embed-certs-837871", held for 19.540117712s
	I0116 23:54:41.527105   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.527420   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:41.530393   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.530857   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.530884   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.531031   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531460   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531637   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531720   60073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:41.531774   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.531821   60073 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:41.531854   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.534407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534578   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.534819   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534933   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535031   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.535068   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.535135   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535229   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535308   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535381   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535431   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.535512   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535633   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.653469   60073 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:41.658877   60073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:41.797035   60073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:41.804397   60073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:41.804475   60073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:41.819295   60073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:41.819319   60073 start.go:475] detecting cgroup driver to use...
	I0116 23:54:41.819382   60073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:41.833454   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:41.845089   60073 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:41.845145   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:41.857037   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:41.869156   60073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:41.968252   60073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:42.079885   60073 docker.go:233] disabling docker service ...
	I0116 23:54:42.079949   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:42.091847   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:42.102517   60073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:42.217275   60073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:42.314542   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:42.326438   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:42.342285   60073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:42.342356   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.354962   60073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:42.355039   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.367222   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.379029   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.387819   60073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:42.396923   60073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:42.404505   60073 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:42.404567   60073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:42.415632   60073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:42.423935   60073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:42.520457   60073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:42.676659   60073 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:42.676727   60073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:42.681457   60073 start.go:543] Will wait 60s for crictl version
	I0116 23:54:42.681535   60073 ssh_runner.go:195] Run: which crictl
	I0116 23:54:42.685259   60073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:42.728719   60073 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:42.728807   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.780603   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.830363   60073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
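
The cri-o preparation above comes down to two in-place sed edits against /etc/crio/crio.conf.d/02-crio.conf: one for the pause image and one for the cgroup manager. The sketch below only assembles those commands; crioSedCommands is an illustrative helper, not minikube's actual code:

    package main

    import "fmt"

    // crioSedCommands builds the two sed edits shown in the log above:
    // point cri-o at the desired pause image and switch its cgroup manager.
    func crioSedCommands(pauseImage, cgroupManager string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
        }
    }

    func main() {
        for _, c := range crioSedCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
            fmt.Println(c)
        }
    }
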
	I0116 23:54:39.032115   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 23:54:39.032163   59938 cache_images.go:123] Successfully loaded all cached images
	I0116 23:54:39.032171   59938 cache_images.go:92] LoadImages completed in 15.67329231s
	I0116 23:54:39.032335   59938 ssh_runner.go:195] Run: crio config
	I0116 23:54:39.091256   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:39.091279   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:39.091299   59938 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:39.091318   59938 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-085322 NodeName:no-preload-085322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:39.091470   59938 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-085322"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:39.091558   59938 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-085322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:39.091619   59938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 23:54:39.100748   59938 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:39.100805   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:39.108879   59938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 23:54:39.123478   59938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 23:54:39.138234   59938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 23:54:39.153408   59938 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:39.156806   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
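
The /etc/hosts update above avoids editing the file in place: it writes a filtered copy plus the fresh control-plane entry to a temp file and then copies that file back over /etc/hosts. A sketch of assembling the same compound command; buildHostsCmd is a hypothetical helper shown only to make the quoting explicit:

    package main

    import "fmt"

    // buildHostsCmd reproduces the shape of the command in the log: drop any
    // stale line for the host, append a fresh tab-separated IP/name pair into a
    // temp file, then copy the temp file back over /etc/hosts.
    func buildHostsCmd(ip, name string) string {
        entry := ip + "\t" + name // real tab, as in the log
        return fmt.Sprintf(
            `{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
            name, entry)
    }

    func main() {
        fmt.Println(buildHostsCmd("192.168.50.183", "control-plane.minikube.internal"))
    }
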
	I0116 23:54:39.168459   59938 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322 for IP: 192.168.50.183
	I0116 23:54:39.168490   59938 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:39.168630   59938 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:39.168669   59938 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:39.168728   59938 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/client.key
	I0116 23:54:39.168800   59938 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key.c63b40e0
	I0116 23:54:39.168839   59938 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key
	I0116 23:54:39.168946   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:39.168971   59938 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:39.168981   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:39.169006   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:39.169029   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:39.169052   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:39.169104   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:39.169755   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:39.191634   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:54:39.213185   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:39.234431   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:54:39.255434   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:39.277092   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:39.299752   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:39.321124   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:39.342706   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:39.363848   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:39.384588   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:39.405641   59938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:39.421517   59938 ssh_runner.go:195] Run: openssl version
	I0116 23:54:39.426839   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:39.435875   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440157   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440217   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.445267   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:39.454308   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:39.463232   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467601   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467660   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.473056   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:39.482143   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:39.491441   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495918   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495984   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.501453   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
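
The openssl x509 -hash / ln -fs pairs above follow the OpenSSL CA-directory convention: each certificate is linked under /etc/ssl/certs as <subject-hash>.0 so TLS clients can find it by hash. A rough sketch of the same two steps; linkCACert is illustrative and shells out to openssl rather than reimplementing the hash:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of certPath and links the
    // certificate into certsDir as <hash>.0, mirroring the ln -fs calls above.
    func linkCACert(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", fmt.Errorf("hashing %s: %v", certPath, err)
        }
        link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
        _ = os.Remove(link) // replace any existing link so reruns stay idempotent
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("linked:", link)
    }
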
	I0116 23:54:39.510832   59938 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:39.515055   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:39.520820   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:39.526190   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:39.531649   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:39.536949   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:39.542406   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:39.547673   59938 kubeadm.go:404] StartCluster: {Name:no-preload-085322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:39.547793   59938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:39.547843   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:39.584159   59938 cri.go:89] found id: ""
	I0116 23:54:39.584236   59938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:39.592749   59938 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:39.592769   59938 kubeadm.go:636] restartCluster start
	I0116 23:54:39.592830   59938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:39.600998   59938 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:39.602031   59938 kubeconfig.go:92] found "no-preload-085322" server: "https://192.168.50.183:8443"
	I0116 23:54:39.604410   59938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:39.612167   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:39.612220   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:39.622740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.112200   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.112274   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.123342   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.612980   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.613059   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.624162   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.112722   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.112787   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.123740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.612248   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.626135   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.112616   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.112723   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.126872   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.612417   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.612503   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.623787   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.112309   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.112383   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.127168   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.551739   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Start
	I0116 23:54:41.551879   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring networks are active...
	I0116 23:54:41.552631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network default is active
	I0116 23:54:41.552977   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network mk-default-k8s-diff-port-967325 is active
	I0116 23:54:41.553395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Getting domain xml...
	I0116 23:54:41.554029   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Creating domain...
	I0116 23:54:42.830696   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting to get IP...
	I0116 23:54:42.831669   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832085   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832186   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:42.832069   61077 retry.go:31] will retry after 250.838508ms: waiting for machine to come up
	I0116 23:54:43.084848   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085478   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085513   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.085378   61077 retry.go:31] will retry after 344.020128ms: waiting for machine to come up
	I0116 23:54:43.430795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431300   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431329   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.431260   61077 retry.go:31] will retry after 397.588837ms: waiting for machine to come up
	I0116 23:54:42.831766   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:42.834360   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:42.834763   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834949   60073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:42.838761   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:42.853154   60073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:54:42.853222   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:42.890184   60073 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:54:42.890265   60073 ssh_runner.go:195] Run: which lz4
	I0116 23:54:42.894168   60073 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:54:42.898036   60073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:54:42.898066   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:54:43.612492   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.612614   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.626278   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.112257   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.112377   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.126612   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.612241   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.626667   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.112214   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.112305   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.127417   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.612957   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.613061   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.626610   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.112219   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.112324   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.126151   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.612419   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.612513   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.623163   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.112516   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.112621   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.123247   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.612620   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.612713   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.623687   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.112357   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.112460   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.126673   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
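
The repeated "Checking apiserver status" entries above are a fixed-interval poll: roughly every half second the same pgrep is re-run, and a non-zero exit simply means the kube-apiserver process is not up yet. A minimal sketch of that polling pattern; pollAPIServer and its timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollAPIServer re-runs the pgrep from the log until it succeeds or the
    // deadline passes; failure just means "no matching process yet".
    func pollAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // apiserver process found
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        if err := pollAPIServer(500*time.Millisecond, 10*time.Second); err != nil {
            fmt.Println(err)
        }
    }
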
	I0116 23:54:43.830893   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831467   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.831405   61077 retry.go:31] will retry after 443.763933ms: waiting for machine to come up
	I0116 23:54:44.277218   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277738   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.277666   61077 retry.go:31] will retry after 534.948362ms: waiting for machine to come up
	I0116 23:54:44.814256   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814634   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.814585   61077 retry.go:31] will retry after 942.746702ms: waiting for machine to come up
	I0116 23:54:45.758822   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759311   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759340   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:45.759238   61077 retry.go:31] will retry after 1.189643515s: waiting for machine to come up
	I0116 23:54:46.951211   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951644   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:46.951576   61077 retry.go:31] will retry after 1.124824496s: waiting for machine to come up
	I0116 23:54:48.077539   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.077964   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.078001   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:48.077909   61077 retry.go:31] will retry after 1.239334518s: waiting for machine to come up
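
The retry.go lines above show the wait-for-IP loop backing off with progressively longer delays while libvirt hands the VM a DHCP lease. A small sketch of a retry helper in that spirit; the growth factor and attempt count are assumptions, not the exact values minikube computes:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping
    // a little longer after each failure, as in the retry.go entries above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay += delay / 2 // grow ~1.5x per round (illustrative)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 250*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
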
	I0116 23:54:44.553853   60073 crio.go:444] Took 1.659729 seconds to copy over tarball
	I0116 23:54:44.553941   60073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:54:47.428880   60073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87490029s)
	I0116 23:54:47.428913   60073 crio.go:451] Took 2.875036 seconds to extract the tarball
	I0116 23:54:47.428921   60073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:54:47.469606   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:47.521549   60073 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:54:47.521580   60073 cache_images.go:84] Images are preloaded, skipping loading
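
The preload path above is: scp the lz4 tarball into the guest, extract it under /var with extended attributes preserved, remove the tarball, then confirm with crictl images that everything needed is present. The sketch below runs the same tar invocation locally, purely to illustrate the flags; it is not minikube's own helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same tar flags as the log: keep xattrs (including security.capability)
        // and decompress with lz4 while extracting into /var.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Println("preloaded images extracted")
    }
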
	I0116 23:54:47.521660   60073 ssh_runner.go:195] Run: crio config
	I0116 23:54:47.575254   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:54:47.575276   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:47.575292   60073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:47.575309   60073 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-837871 NodeName:embed-certs-837871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:47.575434   60073 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-837871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:47.575518   60073 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-837871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:47.575569   60073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:54:47.584525   60073 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:47.584604   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:47.592958   60073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 23:54:47.608090   60073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:54:47.623862   60073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 23:54:47.640242   60073 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:47.644031   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:47.658210   60073 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871 for IP: 192.168.39.226
	I0116 23:54:47.658247   60073 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:47.658451   60073 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:47.658543   60073 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:47.658766   60073 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/client.key
	I0116 23:54:47.658866   60073 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key.1754aec7
	I0116 23:54:47.658920   60073 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key
	I0116 23:54:47.659066   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:47.659104   60073 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:47.659123   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:47.659160   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:47.659190   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:47.659223   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:47.659275   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:47.659998   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:47.687031   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:54:47.713026   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:47.738546   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:54:47.764460   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:47.789464   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:47.814847   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:47.839476   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:47.864396   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:47.889208   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:47.914128   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:47.935079   60073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:47.950932   60073 ssh_runner.go:195] Run: openssl version
	I0116 23:54:47.957306   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:47.967238   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972287   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972338   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.977862   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:47.989326   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:47.999739   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004111   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004170   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.009425   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:48.019822   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:48.029871   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034154   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034221   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.039911   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:48.051585   60073 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:48.056576   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:48.062200   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:48.067931   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:48.073393   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:48.079291   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:48.084923   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
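The certificate steps above follow a fixed pattern: each CA is copied into /usr/share/ca-certificates, `openssl x509 -hash -noout` computes the subject hash OpenSSL uses for lookups, a symlink named `<hash>.0` is created under /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0 in this run), and every control-plane certificate is then verified with `openssl x509 -checkend 86400`, i.e. "will this certificate still be valid 24 hours from now?". The Go sketch below only illustrates that last check; it is not minikube code, and the path is a hypothetical example.

// Illustrative sketch (not minikube code): what "openssl x509 -checkend 86400"
// verifies for each certificate above - that the cert remains valid for the
// next 24 hours. The path is a hypothetical example.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h and would need to be regenerated")
	}
}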
	I0116 23:54:48.090458   60073 kubeadm.go:404] StartCluster: {Name:embed-certs-837871 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:48.090572   60073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:48.090637   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:48.132138   60073 cri.go:89] found id: ""
	I0116 23:54:48.132214   60073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:48.141955   60073 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:48.141976   60073 kubeadm.go:636] restartCluster start
	I0116 23:54:48.142032   60073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:48.151297   60073 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.152324   60073 kubeconfig.go:92] found "embed-certs-837871" server: "https://192.168.39.226:8443"
	I0116 23:54:48.154585   60073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:48.163509   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.163570   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.175536   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.664083   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.664180   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.676605   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
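The repeated "Checking apiserver status ..." entries are a poll loop: roughly every 500ms minikube runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH and treats a non-zero exit (no matching process) as "apiserver not up yet". A minimal local sketch of that loop, assuming pgrep is available on the host, might look like this; it is illustrative only, not the minikube implementation.

// Illustrative poll loop: wait until a kube-apiserver process exists,
// checking every 500ms with pgrep, up to a deadline. Not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}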
	I0116 23:54:48.613067   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.992894   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.004266   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.112494   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.112595   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.123795   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.612548   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.612642   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.626676   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.626707   59938 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:49.626718   59938 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:49.626732   59938 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:49.626806   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:49.668119   59938 cri.go:89] found id: ""
	I0116 23:54:49.668192   59938 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:49.682918   59938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:49.691744   59938 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:49.691817   59938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700863   59938 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700895   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:49.815616   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.020421   59938 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.204764214s)
	I0116 23:54:51.020454   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.216832   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.332109   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
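On a restart with missing kubeconfigs, the log shows the control plane being rebuilt phase by phase rather than with a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane and etcd are regenerated in that order from /var/tmp/minikube/kubeadm.yaml, each with the version-pinned binaries directory prepended to PATH. A compressed sketch of that sequence, reusing the paths and version seen in the log, is shown below; it is an illustration, not the actual restart code.

// Illustrative: run the "kubeadm init phase" sequence from the log, one
// phase at a time, against the generated kubeadm.yaml. Paths and the
// Kubernetes version are taken from the log but hard-coded here for brevity.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH\" " +
			"kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
			os.Exit(1)
		}
		fmt.Printf("phase %q done\n", phase)
	}
}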
	I0116 23:54:51.399376   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:51.399475   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:51.899827   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.400392   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.899528   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.399686   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:49.319244   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319686   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319717   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:49.319624   61077 retry.go:31] will retry after 1.922153535s: waiting for machine to come up
	I0116 23:54:51.243587   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244058   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244098   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:51.244008   61077 retry.go:31] will retry after 2.437065869s: waiting for machine to come up
	I0116 23:54:53.683433   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683851   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683882   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:53.683823   61077 retry.go:31] will retry after 3.130209662s: waiting for machine to come up
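Meanwhile the default-k8s-diff-port machine is still booting: the KVM driver cannot yet find a DHCP lease for the domain's MAC address, so retry.go waits with growing, jittered delays (roughly 1.9s, 2.4s, 3.1s, then 4.4s in this run) before asking libvirt again. A generic sketch of that kind of backoff loop follows; lookupIP is a hypothetical stand-in for the libvirt lease query, not a real minikube API.

// Illustrative backoff loop: retry an IP lookup with growing, jittered delays
// until it succeeds or a deadline passes. lookupIP is a hypothetical stand-in
// for "ask libvirt for the domain's DHCP lease".
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: the real driver inspects libvirt's DHCP leases.
	return "", errors.New("unable to find current IP address of domain")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add up to 50% jitter, as the retry.go lines suggest.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if ip, err := waitForIP(10 * time.Second); err == nil {
		fmt.Println("machine IP:", ip)
	} else {
		fmt.Println(err)
	}
}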
	I0116 23:54:49.163895   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.351314   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.362966   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.664243   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.664369   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.683487   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.163655   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.163757   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.180005   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.664531   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.664611   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.680106   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.163758   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.163894   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.179982   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.664626   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.664708   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.676699   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.163544   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.163670   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.180656   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.663792   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.663880   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.678849   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.164052   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.164169   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.178666   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.664220   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.664316   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.678867   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.899990   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.919132   59938 api_server.go:72] duration metric: took 2.51975517s to wait for apiserver process to appear ...
	I0116 23:54:53.919159   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:54:53.919179   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.905143   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.905180   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.905196   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.941657   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.941684   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.941697   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.986154   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.986183   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:57.419788   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.424352   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.424379   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:57.919987   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.926989   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.927013   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:58.420219   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:58.426904   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:54:58.435007   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:54:58.435038   59938 api_server.go:131] duration metric: took 4.515871856s to wait for apiserver health ...
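Once a kube-apiserver process exists, the check switches from pgrep to the /healthz endpoint: the first responses are 403 because the unauthenticated probe arrives as system:anonymous before the RBAC bootstrap roles exist, then 500 while poststarthooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok", at which point the control-plane version is read. A minimal anonymous probe of that endpoint might look like the following sketch (the address is the one from the log; this is not minikube's api_server.go).

// Illustrative: poll https://<apiserver>/healthz until it returns 200 "ok".
// TLS verification is skipped because this anonymous probe presents no
// client certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.50.183:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}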
	I0116 23:54:58.435051   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:58.435061   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:58.437150   59938 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:54:58.438936   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:54:58.455657   59938 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
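With the apiserver healthy, minikube configures networking: the kvm2 + crio combination has no CNI of its own, so a single bridge conflist is written to /etc/cni/net.d/1-k8s.conflist (457 bytes in this run). The exact file contents are not shown in the log; a typical bridge plus port-mapping conflist of roughly that shape looks like the following illustrative sample (field values are assumptions, not the byte-for-byte file).

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}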
	I0116 23:54:58.508821   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:54:58.522305   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:54:58.522361   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:54:58.522372   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:54:58.522386   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:54:58.522403   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:54:58.522414   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:54:58.522428   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:54:58.522440   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:54:58.522449   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:54:58.522459   59938 system_pods.go:74] duration metric: took 13.604825ms to wait for pod list to return data ...
	I0116 23:54:58.522472   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:54:58.525739   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:54:58.525780   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:54:58.525802   59938 node_conditions.go:105] duration metric: took 3.32348ms to run NodePressure ...
	I0116 23:54:58.525836   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:56.815572   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816189   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816215   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:56.816141   61077 retry.go:31] will retry after 4.356544243s: waiting for machine to come up
	I0116 23:54:54.164263   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.164410   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.179137   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:54.663638   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.663755   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.678463   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.163957   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.164041   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.177018   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.663543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.663648   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.674693   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.164347   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.164456   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.175674   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.664319   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.664402   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.675373   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.164471   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.164576   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.176504   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.664144   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.664251   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.676983   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.164543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:58.164621   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:58.176779   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.176811   60073 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:58.176821   60073 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:58.176833   60073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:58.176899   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:58.214453   60073 cri.go:89] found id: ""
	I0116 23:54:58.214526   60073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:58.232076   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:58.240808   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:58.240879   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.249983   60073 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.250013   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.373313   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.857922   59938 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862719   59938 kubeadm.go:787] kubelet initialised
	I0116 23:54:58.862738   59938 kubeadm.go:788] duration metric: took 4.782925ms waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862746   59938 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:54:58.869022   59938 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.874505   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874535   59938 pod_ready.go:81] duration metric: took 5.485562ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.874546   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874554   59938 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.879329   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879355   59938 pod_ready.go:81] duration metric: took 4.787755ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.879363   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879368   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.883928   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883949   59938 pod_ready.go:81] duration metric: took 4.571713ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.883961   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883969   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.912868   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912894   59938 pod_ready.go:81] duration metric: took 28.911722ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.912907   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912915   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.313029   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313069   59938 pod_ready.go:81] duration metric: took 400.142619ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.313082   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313090   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.712991   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713014   59938 pod_ready.go:81] duration metric: took 399.912003ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.713023   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713028   59938 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:00.114190   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114215   59938 pod_ready.go:81] duration metric: took 401.177651ms waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:00.114225   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114231   59938 pod_ready.go:38] duration metric: took 1.251475914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
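Every pod_ready check above short-circuits for the same reason: the node object itself still reports Ready=False, so waiting on individual system pods is pointless and each wait is recorded as "(skipping!)". The underlying test is simply a look at the node's Ready condition; a hedged client-go sketch is below (the kubeconfig path and node name are the ones from this run, but the code is only an illustration, not the test helper).

// Illustrative: decide whether a node is Ready by inspecting its status
// conditions - the condition the `has status "Ready":"False"` log lines
// refer to. Requires the k8s.io/client-go module.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17975-6238/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "no-preload-085322", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready:", nodeReady(node))
}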
	I0116 23:55:00.114247   59938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:00.127362   59938 ops.go:34] apiserver oom_adj: -16
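The `cat /proc/$(pgrep kube-apiserver)/oom_adj` step confirms the apiserver ended up with the protected OOM score it is supposed to have (-16 here, meaning the kernel's OOM killer strongly prefers other victims). Reading it is just a /proc lookup; a small sketch, with the process name as a parameter, is shown below for illustration.

// Illustrative: find the newest process matching a name with pgrep and read
// its /proc/<pid>/oom_adj - the value the log reports as "apiserver oom_adj: -16".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func oomAdj(process string) (string, error) {
	out, err := exec.Command("pgrep", "-n", process).Output()
	if err != nil {
		return "", fmt.Errorf("no %s process found: %w", process, err)
	}
	pid := strings.TrimSpace(string(out))
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(val)), nil
}

func main() {
	adj, err := oomAdj("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}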
	I0116 23:55:00.127388   59938 kubeadm.go:640] restartCluster took 20.534611532s
	I0116 23:55:00.127403   59938 kubeadm.go:406] StartCluster complete in 20.579733794s
	I0116 23:55:00.127422   59938 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.127503   59938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:00.129224   59938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.129463   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:00.130188   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:55:00.129546   59938 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:00.130489   59938 addons.go:69] Setting storage-provisioner=true in profile "no-preload-085322"
	I0116 23:55:00.130520   59938 addons.go:234] Setting addon storage-provisioner=true in "no-preload-085322"
	W0116 23:55:00.130550   59938 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:00.130626   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.131148   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.131179   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.131603   59938 addons.go:69] Setting default-storageclass=true in profile "no-preload-085322"
	I0116 23:55:00.131662   59938 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-085322"
	I0116 23:55:00.132229   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.132282   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.132642   59938 addons.go:69] Setting metrics-server=true in profile "no-preload-085322"
	I0116 23:55:00.132682   59938 addons.go:234] Setting addon metrics-server=true in "no-preload-085322"
	W0116 23:55:00.132691   59938 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:00.132738   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.133280   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.133322   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.137759   59938 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-085322" context rescaled to 1 replicas
	I0116 23:55:00.137827   59938 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:00.139774   59938 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:00.141410   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:00.150892   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0116 23:55:00.151398   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.151952   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.151970   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.152274   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0116 23:55:00.152458   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0116 23:55:00.152489   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.152695   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.152865   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153081   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153356   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153401   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153541   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153583   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153867   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.153942   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.154667   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.154714   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.155326   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.155362   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.156980   59938 addons.go:234] Setting addon default-storageclass=true in "no-preload-085322"
	W0116 23:55:00.157007   59938 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:00.157043   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.157421   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.157529   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.174130   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0116 23:55:00.174627   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.175185   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.175204   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.175566   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.175814   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.175862   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0116 23:55:00.176349   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.176936   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.176948   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.177295   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.177469   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.177631   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.179319   59938 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:00.180744   59938 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.180762   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:00.180777   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.179023   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.182381   59938 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:00.183551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:00.183564   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:00.183585   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.183692   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184112   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.184133   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.184767   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.184932   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.185450   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.186460   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.186779   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.186812   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.187038   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.187221   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.187328   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.187452   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.189369   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0116 23:55:00.189703   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.190080   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.190091   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.190478   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.190890   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.190930   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.205734   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0116 23:55:00.206238   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.206799   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.206818   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.207212   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.207446   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.208811   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.209063   59938 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.209077   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:00.209094   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.211899   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212297   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.212323   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212575   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.212826   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.213095   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.213275   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.307298   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.335551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:00.335575   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:00.372999   59938 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:00.373001   59938 node_ready.go:35] waiting up to 6m0s for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:00.378131   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:00.378152   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:00.380282   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.401018   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:00.401069   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:00.426132   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093491344s)
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020515974s)
	I0116 23:55:01.400920   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400937   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400965   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400993   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400886   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401092   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401295   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401313   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401324   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401334   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401360   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401402   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401416   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401417   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401426   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401436   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401448   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401458   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401468   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401476   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401725   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401757   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401781   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401789   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401797   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401950   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401973   59938 addons.go:470] Verifying addon metrics-server=true in "no-preload-085322"
	I0116 23:55:01.403136   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.403161   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.403172   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.410263   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.410287   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.410536   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.410575   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.410578   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.412923   59938 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0116 23:55:02.567723   59622 start.go:369] acquired machines lock for "old-k8s-version-771669" in 54.450397128s
	I0116 23:55:02.567772   59622 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:55:02.567779   59622 fix.go:54] fixHost starting: 
	I0116 23:55:02.568183   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:02.568215   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:02.587692   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0116 23:55:02.588096   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:02.588571   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:02.588590   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:02.588934   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:02.589163   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:02.589273   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:02.590929   59622 fix.go:102] recreateIfNeeded on old-k8s-version-771669: state=Stopped err=<nil>
	I0116 23:55:02.591002   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	W0116 23:55:02.591207   59622 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:55:02.593233   59622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-771669" ...
	I0116 23:55:01.414436   59938 addons.go:505] enable addons completed in 1.284891826s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0116 23:55:02.377542   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
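The no-preload-085322 segment above applies the metrics-server, storage-provisioner, and default-storageclass manifests with the bundled kubectl and then polls the node for Ready. A rough manual spot-check of the same state, illustrative only (the context and node names come from the log; the specific objects queried are assumptions), would be:

	# Assumed follow-up commands, not part of the test harness:
	kubectl --context no-preload-085322 -n kube-system get deployment metrics-server
	kubectl --context no-preload-085322 get storageclass
	kubectl --context no-preload-085322 get node no-preload-085322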
	I0116 23:55:01.175656   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Found IP for machine: 192.168.61.144
	I0116 23:55:01.176276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has current primary IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176287   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserving static IP address...
	I0116 23:55:01.176764   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserved static IP address: 192.168.61.144
	I0116 23:55:01.176803   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.176821   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for SSH to be available...
	I0116 23:55:01.176849   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | skip adding static IP to network mk-default-k8s-diff-port-967325 - found existing host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"}
	I0116 23:55:01.176862   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Getting to WaitForSSH function...
	I0116 23:55:01.179585   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180052   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.180086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH client type: external
	I0116 23:55:01.180225   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa (-rw-------)
	I0116 23:55:01.180258   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:01.180280   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | About to run SSH command:
	I0116 23:55:01.180298   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | exit 0
	I0116 23:55:01.287063   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:01.287361   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetConfigRaw
	I0116 23:55:01.288015   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.291188   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291601   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.291651   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291892   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:55:01.292147   60269 machine.go:88] provisioning docker machine ...
	I0116 23:55:01.292171   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:01.292392   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292603   60269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-967325"
	I0116 23:55:01.292631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.295688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.296107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296214   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.296399   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296557   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296732   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.296957   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.297484   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.297508   60269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-967325 && echo "default-k8s-diff-port-967325" | sudo tee /etc/hostname
	I0116 23:55:01.444451   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-967325
	
	I0116 23:55:01.444484   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.447658   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448083   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.448130   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448237   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.448482   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448670   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448836   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.449035   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.449518   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.449549   60269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-967325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-967325/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-967325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:01.592961   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:01.592998   60269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:01.593037   60269 buildroot.go:174] setting up certificates
	I0116 23:55:01.593052   60269 provision.go:83] configureAuth start
	I0116 23:55:01.593066   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.593369   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.596637   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597053   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.597093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.599945   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600294   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.600332   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600435   60269 provision.go:138] copyHostCerts
	I0116 23:55:01.600492   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:01.600500   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:01.600560   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:01.600653   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:01.600657   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:01.600675   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:01.600733   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:01.600736   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:01.600751   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:01.600807   60269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-967325 san=[192.168.61.144 192.168.61.144 localhost 127.0.0.1 minikube default-k8s-diff-port-967325]
	I0116 23:55:01.777575   60269 provision.go:172] copyRemoteCerts
	I0116 23:55:01.777655   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:01.777685   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.780729   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.781117   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781323   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.781493   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.781672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.781817   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:01.875542   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:01.898144   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 23:55:01.923770   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:01.947374   60269 provision.go:86] duration metric: configureAuth took 354.306627ms
	I0116 23:55:01.947400   60269 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:01.947656   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:55:01.947752   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.950688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951006   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.951031   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951309   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.951475   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951846   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.952024   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.952549   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.952575   60269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:02.296465   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:02.296504   60269 machine.go:91] provisioned docker machine in 1.004340116s
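The provisioning step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so CRI-O treats the in-cluster service CIDR 10.96.0.0/12 as an insecure registry, then restarts crio. A minimal way to confirm the drop-in on the guest (a sketch, not part of the harness):

	cat /etc/sysconfig/crio.minikube
	# expected, per the SSH output above:
	#   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '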
	I0116 23:55:02.296517   60269 start.go:300] post-start starting for "default-k8s-diff-port-967325" (driver="kvm2")
	I0116 23:55:02.296533   60269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:02.296559   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.296898   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:02.296931   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.299843   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.300330   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300424   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.300613   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.300813   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.300988   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.392380   60269 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:02.396719   60269 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:02.396746   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:02.396840   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:02.396931   60269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:02.397013   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:02.405217   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:02.428260   60269 start.go:303] post-start completed in 131.726459ms
	I0116 23:55:02.428289   60269 fix.go:56] fixHost completed within 20.901025477s
	I0116 23:55:02.428351   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.431541   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.431904   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.431935   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.432124   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.432327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432679   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.432865   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:02.433181   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:02.433200   60269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 23:55:02.567559   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449302.518065106
	
	I0116 23:55:02.567583   60269 fix.go:206] guest clock: 1705449302.518065106
	I0116 23:55:02.567592   60269 fix.go:219] Guest: 2024-01-16 23:55:02.518065106 +0000 UTC Remote: 2024-01-16 23:55:02.428292966 +0000 UTC m=+263.717566224 (delta=89.77214ms)
	I0116 23:55:02.567628   60269 fix.go:190] guest clock delta is within tolerance: 89.77214ms
	I0116 23:55:02.567634   60269 start.go:83] releasing machines lock for "default-k8s-diff-port-967325", held for 21.040406039s
	I0116 23:55:02.567676   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.567951   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:02.571196   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.571612   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.571641   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.572815   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573415   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573626   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573709   60269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:02.573777   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.573935   60269 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:02.573963   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.577057   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577347   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577687   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577741   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577786   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577804   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577976   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578023   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578172   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578358   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578359   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578488   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.578514   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.707601   60269 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:02.715420   60269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:02.871362   60269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:02.878362   60269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:02.878438   60269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:02.898508   60269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:02.898534   60269 start.go:475] detecting cgroup driver to use...
	I0116 23:55:02.898627   60269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:02.915544   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:02.929881   60269 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:02.929948   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:02.946126   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:02.963314   60269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:03.087669   60269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:03.231908   60269 docker.go:233] disabling docker service ...
	I0116 23:55:03.232001   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:03.247745   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:03.263573   60269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:03.394931   60269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:03.533725   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:03.550475   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:03.571922   60269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:55:03.571984   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.584086   60269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:03.584195   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.595191   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.604671   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.614076   60269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:03.623637   60269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:03.632143   60269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:03.632225   60269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:03.645964   60269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:03.657719   60269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
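The sed edits above point /etc/crio/crio.conf.d/02-crio.conf at the registry.k8s.io/pause:3.9 pause image, switch the cgroup manager to cgroupfs, and pin conmon to the "pod" cgroup before crio is restarted. Checking the result inside the guest (illustrative, assuming the drop-in path from the log):

	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# expected values, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"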
	I0116 23:54:59.164409   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.363424   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.434315   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.505227   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:59.505321   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.006175   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.505693   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.005697   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.505467   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.005808   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.033017   60073 api_server.go:72] duration metric: took 2.527792184s to wait for apiserver process to appear ...
	I0116 23:55:02.033039   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:02.033056   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:03.785123   60269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:03.976744   60269 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:03.976819   60269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:03.981545   60269 start.go:543] Will wait 60s for crictl version
	I0116 23:55:03.981598   60269 ssh_runner.go:195] Run: which crictl
	I0116 23:55:03.985233   60269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:04.033443   60269 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:04.033541   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.087776   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.142302   60269 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:55:02.594568   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Start
	I0116 23:55:02.594750   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring networks are active...
	I0116 23:55:02.595457   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network default is active
	I0116 23:55:02.595812   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network mk-old-k8s-version-771669 is active
	I0116 23:55:02.596285   59622 main.go:141] libmachine: (old-k8s-version-771669) Getting domain xml...
	I0116 23:55:02.597150   59622 main.go:141] libmachine: (old-k8s-version-771669) Creating domain...
	I0116 23:55:03.999986   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting to get IP...
	I0116 23:55:04.001060   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.001581   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.001663   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.001550   61289 retry.go:31] will retry after 298.561748ms: waiting for machine to come up
	I0116 23:55:04.302120   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.302820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.302847   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.302767   61289 retry.go:31] will retry after 342.293835ms: waiting for machine to come up
	I0116 23:55:04.646424   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.647107   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.647133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.647055   61289 retry.go:31] will retry after 395.611503ms: waiting for machine to come up
	I0116 23:55:05.046785   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.047276   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.047304   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.047189   61289 retry.go:31] will retry after 552.22886ms: waiting for machine to come up
	I0116 23:55:07.029353   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.029384   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.029401   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.187789   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.187830   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.187877   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.197889   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.197924   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.533214   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.540976   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:07.541008   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.033550   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.044749   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:08.044779   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.533231   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.540197   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0116 23:55:08.551065   60073 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:08.551108   60073 api_server.go:131] duration metric: took 6.518060223s to wait for apiserver health ...
	I0116 23:55:08.551119   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:55:08.551128   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:08.553370   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
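The 403 and 500 responses above are the expected progression while the restarted apiserver finishes its post-start hooks: anonymous requests are rejected until the RBAC bootstrap roles exist, and /healthz keeps returning 500 until every hook reports ok. The probe can be reproduced by hand against the endpoint from the log; this is an illustrative equivalent of the api_server.go polling, not the harness's code:

	# -k because the apiserver presents a cluster-internal certificate
	curl -k -sS -o /dev/null -w '%{http_code}\n' https://192.168.39.226:8443/healthz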
	I0116 23:55:04.377661   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:06.377732   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:07.377978   59938 node_ready.go:49] node "no-preload-085322" has status "Ready":"True"
	I0116 23:55:07.378007   59938 node_ready.go:38] duration metric: took 7.004955625s waiting for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:07.378019   59938 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:07.394319   59938 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401604   59938 pod_ready.go:92] pod "coredns-76f75df574-ptq95" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.401634   59938 pod_ready.go:81] duration metric: took 7.260618ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401647   59938 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412094   59938 pod_ready.go:92] pod "etcd-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.412123   59938 pod_ready.go:81] duration metric: took 10.46753ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412137   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922096   59938 pod_ready.go:92] pod "kube-apiserver-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.922169   59938 pod_ready.go:81] duration metric: took 510.023791ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922208   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929615   59938 pod_ready.go:92] pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.929645   59938 pod_ready.go:81] duration metric: took 7.422332ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929659   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178529   59938 pod_ready.go:92] pod "kube-proxy-64z5c" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.178558   59938 pod_ready.go:81] duration metric: took 248.89013ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178572   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
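
The pod_ready.go waits above amount to fetching each pod and testing its Ready condition until it flips to True. A rough client-go equivalent, as a sketch only: the kubeconfig path is a placeholder, the pod name is copied from the log, and isPodReady is a hypothetical helper rather than the harness's own function.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll the pod until Ready or timeout, mirroring the pod_ready.go loop above.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-085322", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
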
	I0116 23:55:04.144239   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:04.147395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.147816   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:04.147864   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.148032   60269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:04.152106   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:04.166312   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:55:04.166412   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:04.207955   60269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:55:04.208024   60269 ssh_runner.go:195] Run: which lz4
	I0116 23:55:04.211817   60269 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 23:55:04.215791   60269 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:04.215816   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:55:06.109275   60269 crio.go:444] Took 1.897478 seconds to copy over tarball
	I0116 23:55:06.109361   60269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:08.555066   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:08.584102   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:08.660533   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:08.680559   60073 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:08.680588   60073 system_pods.go:61] "coredns-5dd5756b68-49p2f" [5241a39a-599e-4ae2-b8c8-7494382819d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:08.680595   60073 system_pods.go:61] "etcd-embed-certs-837871" [99fce5e6-124e-4e96-b722-41c0be595863] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:08.680603   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [7bf73dd6-7f27-482a-896a-a5097bd047a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:08.680609   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [be8f34fb-2d00-4c86-aab3-c4d74d92d42c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:08.680615   60073 system_pods.go:61] "kube-proxy-nglts" [3ec00f1a-258b-4da3-9b41-dbd96156de04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:08.680624   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [f9af2c43-cb66-4ebb-b23c-4f898be33d64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:08.680669   60073 system_pods.go:61] "metrics-server-57f55c9bc5-npd7s" [5aa75079-2c85-4fde-ba88-9ae5bb73ecc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:08.680678   60073 system_pods.go:61] "storage-provisioner" [5bae4d8b-030b-4476-8aa6-f4a66a8f80a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:55:08.680685   60073 system_pods.go:74] duration metric: took 20.127241ms to wait for pod list to return data ...
	I0116 23:55:08.680695   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:08.685562   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:08.685594   60073 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:08.685604   60073 node_conditions.go:105] duration metric: took 4.905393ms to run NodePressure ...
	I0116 23:55:08.685622   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
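
The node_conditions.go lines above report node capacity and verify the NodePressure conditions before addons are applied. A hedged client-go sketch of the same check; the kubeconfig path is a placeholder and the node name is taken from the log.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-837871", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print capacity figures comparable to the log lines above.
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Printf("cpu capacity: %s, ephemeral storage: %s\n", cpu.String(), storage.String())
    	// Flag the node if it reports memory, disk, or PID pressure.
    	for _, c := range node.Status.Conditions {
    		switch c.Type {
    		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    			if c.Status == corev1.ConditionTrue {
    				fmt.Printf("node under pressure: %s\n", c.Type)
    			}
    		}
    	}
    }
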
	I0116 23:55:05.600887   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.601408   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.601444   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.601312   61289 retry.go:31] will retry after 584.67072ms: waiting for machine to come up
	I0116 23:55:06.188018   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:06.188524   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:06.188550   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:06.188434   61289 retry.go:31] will retry after 859.064841ms: waiting for machine to come up
	I0116 23:55:07.048810   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:07.049461   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:07.049491   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:07.049417   61289 retry.go:31] will retry after 1.064800753s: waiting for machine to come up
	I0116 23:55:08.115741   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:08.116406   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:08.116430   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:08.116372   61289 retry.go:31] will retry after 1.289118736s: waiting for machine to come up
	I0116 23:55:09.407820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:09.408291   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:09.408319   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:09.408262   61289 retry.go:31] will retry after 1.623353195s: waiting for machine to come up
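
The libmachine "will retry after …" lines above are a plain retry loop with a growing, jittered delay while waiting for the VM to obtain a DHCP lease. A generic sketch of that pattern; retryWithBackoff and the specific delays are illustrative, not minikube's retry.go.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or the attempts run out.
    // The delay grows each attempt and gets a little jitter, like the
    // "will retry after 584ms / 859ms / 1.06s" sequence in the log above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		wait := delay + jitter
    		fmt.Printf("attempt %d failed (%v), will retry after %s\n", i+1, err, wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // grow the base delay
    	}
    	return err
    }

    func main() {
    	start := time.Now()
    	err := retryWithBackoff(6, 500*time.Millisecond, func() error {
    		// Stand-in for "does the machine have an IP yet?"; succeeds after ~4s.
    		if time.Since(start) < 4*time.Second {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }
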
	I0116 23:55:08.979310   59938 pod_ready.go:92] pod "kube-scheduler-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.979407   59938 pod_ready.go:81] duration metric: took 800.824219ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.979438   59938 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.546193   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:09.452388   60269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342992298s)
	I0116 23:55:09.452415   60269 crio.go:451] Took 3.343109 seconds to extract the tarball
	I0116 23:55:09.452423   60269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:09.497202   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:09.552426   60269 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:55:09.552460   60269 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:55:09.552532   60269 ssh_runner.go:195] Run: crio config
	I0116 23:55:09.623685   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:09.623716   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:09.623743   60269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:09.623767   60269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-967325 NodeName:default-k8s-diff-port-967325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:55:09.623938   60269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-967325"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
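
The kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check fields such as kubernetesVersion or controlPlaneEndpoint is to decode each document generically; a sketch using gopkg.in/yaml.v3, with the file path as a placeholder:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Each document carries its own kind; print the fields of interest.
    		kind, _ := doc["kind"].(string)
    		switch kind {
    		case "ClusterConfiguration":
    			fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
    			fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
    		case "KubeletConfiguration":
    			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
    		}
    	}
    }
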
	
	I0116 23:55:09.624024   60269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-967325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 23:55:09.624079   60269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:55:09.632768   60269 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:09.632838   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:09.642978   60269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 23:55:09.660304   60269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:09.677864   60269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 23:55:09.699234   60269 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:09.703170   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
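
The /etc/hosts update above drops any stale control-plane.minikube.internal line and appends the current IP via a shell pipeline. The same idempotent edit written directly in Go, as a sketch only; it works on a scratch file rather than the real /etc/hosts, and ensureHostsEntry is a hypothetical helper.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry removes any existing line for hostname and appends
    // "ip<TAB>hostname", mirroring the grep -v / echo / cp pipeline above.
    func ensureHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasSuffix(trimmed, "\t"+hostname) || strings.HasSuffix(trimmed, " "+hostname) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + hostname + "\n"
    	// Write to a temp file first, then rename into place.
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	// Work on a scratch copy instead of the real /etc/hosts.
    	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"), 0644)
    	if err := ensureHostsEntry("hosts.test", "192.168.61.144", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
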
	I0116 23:55:09.718511   60269 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325 for IP: 192.168.61.144
	I0116 23:55:09.718551   60269 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:09.718727   60269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:09.718798   60269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:09.718895   60269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/client.key
	I0116 23:55:09.718975   60269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key.a430fbc2
	I0116 23:55:09.719039   60269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key
	I0116 23:55:09.719175   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:09.719225   60269 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:09.719240   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:09.719283   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:09.719318   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:09.719358   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:09.719416   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:09.720339   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:09.748578   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:55:09.778396   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:09.803745   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:55:09.828009   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:09.850951   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:09.874273   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:09.897385   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:09.923319   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:09.946301   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:09.970778   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:09.994497   60269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:10.013259   60269 ssh_runner.go:195] Run: openssl version
	I0116 23:55:10.020357   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:10.032324   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037071   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037122   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.043220   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:10.052796   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:10.063065   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.067904   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.068000   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.074570   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:10.087080   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:10.099734   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105299   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105360   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.112084   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
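
The openssl/ln sequence above installs each CA under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of that step driven from Go, shelling out to openssl exactly as the log does; the destination directory and the linkByHash helper are assumptions for illustration.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash computes the OpenSSL subject hash of certPath and creates a
    // <hash>.0 symlink to it inside certsDir, like the `ln -fs` above.
    func linkByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl hash: %w", err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // -f behaviour: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	_ = os.MkdirAll("/tmp/certs-demo", 0755) // placeholder destination
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-demo"); err != nil {
    		fmt.Println(err)
    	}
    }
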
	I0116 23:55:10.123175   60269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:10.127669   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:10.133522   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:10.139085   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:10.145018   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:10.150920   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:10.156719   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
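
Each `-checkend 86400` probe above asks whether a certificate will still be valid 24 hours from now. The same question can be answered without shelling out, using crypto/x509; a minimal sketch with a placeholder certificate path.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
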
	I0116 23:55:10.162808   60269 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:10.162893   60269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:10.162936   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:10.208917   60269 cri.go:89] found id: ""
	I0116 23:55:10.209008   60269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:10.221689   60269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:10.221710   60269 kubeadm.go:636] restartCluster start
	I0116 23:55:10.221776   60269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:10.233762   60269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.234916   60269 kubeconfig.go:92] found "default-k8s-diff-port-967325" server: "https://192.168.61.144:8444"
	I0116 23:55:10.237484   60269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:10.246418   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.246495   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.257759   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.747378   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.747466   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.761884   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.247445   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.247543   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.258490   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.747483   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.747623   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.764389   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.246997   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.247122   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.262538   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.747219   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.747387   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.762535   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.246636   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.246705   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.258883   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.747504   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.747588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.759640   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
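
Every "Checking apiserver status" line above runs pgrep over SSH, and the loop keeps failing because no kube-apiserver process exists yet after the restart. A local sketch of that polling, using os/exec; the pgrep pattern is copied from the log, while waitForProcess and its timeout are made up for illustration.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForProcess polls `pgrep -xnf pattern` until it reports a PID or the
    // timeout expires. pgrep exits non-zero when nothing matches, which is the
    // "Process exited with status 1" seen repeatedly in the log above.
    func waitForProcess(pattern string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
    	pid, err := waitForProcess("kube-apiserver.*minikube.*", 10*time.Second)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }
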
	I0116 23:55:09.229704   60073 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224745   60073 kubeadm.go:787] kubelet initialised
	I0116 23:55:10.224771   60073 kubeadm.go:788] duration metric: took 994.984702ms waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224781   60073 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:11.348058   60073 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.356516   60073 pod_ready.go:102] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:13.856540   60073 pod_ready.go:92] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:13.856573   60073 pod_ready.go:81] duration metric: took 2.508479475s waiting for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.856586   60073 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.033009   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:11.033544   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:11.033588   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:11.033487   61289 retry.go:31] will retry after 1.553841353s: waiting for machine to come up
	I0116 23:55:12.588794   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:12.589269   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:12.589297   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:12.589245   61289 retry.go:31] will retry after 1.907517113s: waiting for machine to come up
	I0116 23:55:14.499305   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:14.499734   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:14.499759   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:14.499683   61289 retry.go:31] will retry after 3.406811143s: waiting for machine to come up
	I0116 23:55:13.986208   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:15.987948   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:18.490012   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:14.247197   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.247299   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.262013   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:14.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.746558   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.761452   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.246988   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.247075   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.261345   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.747524   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.747618   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.760291   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.246551   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.246648   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.260545   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.746471   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.746585   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.758637   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.247227   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.247331   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.258514   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.747046   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.747138   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.758877   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.247489   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.247561   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.259581   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.747241   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.747335   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.759146   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.867702   60073 pod_ready.go:102] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:17.864681   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.864706   60073 pod_ready.go:81] duration metric: took 4.008111977s waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.864718   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873106   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.873127   60073 pod_ready.go:81] duration metric: took 8.400576ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873136   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878501   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.878519   60073 pod_ready.go:81] duration metric: took 5.375395ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878535   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883653   60073 pod_ready.go:92] pod "kube-proxy-nglts" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.883669   60073 pod_ready.go:81] duration metric: took 5.128525ms waiting for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883680   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.888978   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.888996   60073 pod_ready.go:81] duration metric: took 5.309484ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.889011   60073 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.908092   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:17.908486   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:17.908520   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:17.908432   61289 retry.go:31] will retry after 3.983135021s: waiting for machine to come up
	I0116 23:55:20.987833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:22.989682   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:19.246437   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.246547   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.257900   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:19.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.746572   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.758509   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.247334   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:20.247418   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:20.258909   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.258939   60269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:20.258948   60269 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:20.258958   60269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:20.259023   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:20.300659   60269 cri.go:89] found id: ""
	I0116 23:55:20.300740   60269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:20.315326   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:20.323563   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:20.323629   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331846   60269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331871   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:20.443085   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.556705   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113585461s)
	I0116 23:55:21.556730   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.745024   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.824910   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.916770   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:21.916856   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.416983   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.917411   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:23.417012   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
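
The restart path above rebuilds the control plane piecewise with `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd local) instead of a full `kubeadm init`, then polls for the apiserver process. A compressed sketch of running that phase sequence; the binary and config paths are copied from the log, and the error handling is simplified.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm" // path from the log
    	config := "/var/tmp/minikube/kubeadm.yaml"

    	// Same phase order as the restartCluster log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", config)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running:", kubeadm, args)
    		if err := cmd.Run(); err != nil {
    			fmt.Println("phase failed:", err)
    			return
    		}
    	}
    }
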
	I0116 23:55:19.896636   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.898504   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.896143   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896665   59622 main.go:141] libmachine: (old-k8s-version-771669) Found IP for machine: 192.168.72.114
	I0116 23:55:21.896717   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has current primary IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896729   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserving static IP address...
	I0116 23:55:21.897128   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.897157   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | skip adding static IP to network mk-old-k8s-version-771669 - found existing host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"}
	I0116 23:55:21.897174   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Getting to WaitForSSH function...
	I0116 23:55:21.897194   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserved static IP address: 192.168.72.114
	I0116 23:55:21.897207   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting for SSH to be available...
	I0116 23:55:21.900064   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900492   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.900531   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900775   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH client type: external
	I0116 23:55:21.900805   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa (-rw-------)
	I0116 23:55:21.900835   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:21.900852   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | About to run SSH command:
	I0116 23:55:21.900867   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | exit 0
	I0116 23:55:22.002573   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:22.003051   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetConfigRaw
	I0116 23:55:22.003790   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.007208   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.007726   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007947   59622 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:55:22.008199   59622 machine.go:88] provisioning docker machine ...
	I0116 23:55:22.008225   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.008439   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008649   59622 buildroot.go:166] provisioning hostname "old-k8s-version-771669"
	I0116 23:55:22.008672   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008859   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.011893   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012288   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.012321   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012475   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.012655   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.012825   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.013009   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.013176   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.013645   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.013669   59622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-771669 && echo "old-k8s-version-771669" | sudo tee /etc/hostname
	I0116 23:55:22.159863   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-771669
	
	I0116 23:55:22.159897   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.162806   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163257   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.163296   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163483   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.163700   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.163882   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.164023   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.164179   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.164551   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.164569   59622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-771669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-771669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-771669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:22.309881   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:22.309914   59622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:22.309935   59622 buildroot.go:174] setting up certificates
	I0116 23:55:22.309945   59622 provision.go:83] configureAuth start
	I0116 23:55:22.309957   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.310198   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.312567   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.312901   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.312930   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.313107   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.315382   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.315767   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.315807   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.316000   59622 provision.go:138] copyHostCerts
	I0116 23:55:22.316043   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:22.316053   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:22.316116   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:22.316202   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:22.316210   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:22.316228   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:22.316289   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:22.316296   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:22.316312   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:22.316365   59622 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-771669 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube old-k8s-version-771669]
	I0116 23:55:22.437253   59622 provision.go:172] copyRemoteCerts
	I0116 23:55:22.437325   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:22.437348   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.440075   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440363   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.440390   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440626   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.440808   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.440960   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.441145   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:22.536222   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:22.562061   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 23:55:22.586856   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:22.610936   59622 provision.go:86] duration metric: configureAuth took 300.975023ms
	I0116 23:55:22.610965   59622 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:22.611217   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:55:22.611306   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.614770   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615218   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.615253   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615508   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.615738   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.615931   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.616078   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.616259   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.616622   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.616641   59622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:22.958075   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:22.958102   59622 machine.go:91] provisioned docker machine in 949.885683ms
	I0116 23:55:22.958121   59622 start.go:300] post-start starting for "old-k8s-version-771669" (driver="kvm2")
	I0116 23:55:22.958136   59622 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:22.958160   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.958492   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:22.958528   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.961489   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.961850   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.961879   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.962042   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.962232   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.962423   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.962585   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.058948   59622 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:23.063281   59622 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:23.063309   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:23.063383   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:23.063477   59622 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:23.063589   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:23.075280   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:23.099934   59622 start.go:303] post-start completed in 141.796411ms
	I0116 23:55:23.099963   59622 fix.go:56] fixHost completed within 20.532183026s
	I0116 23:55:23.099986   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.102938   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103320   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.103355   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103471   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.103682   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103837   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103981   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.104148   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:23.104525   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:23.104539   59622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:23.239875   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449323.216935077
	
	I0116 23:55:23.239947   59622 fix.go:206] guest clock: 1705449323.216935077
	I0116 23:55:23.239963   59622 fix.go:219] Guest: 2024-01-16 23:55:23.216935077 +0000 UTC Remote: 2024-01-16 23:55:23.099966517 +0000 UTC m=+357.574360679 (delta=116.96856ms)
	I0116 23:55:23.239987   59622 fix.go:190] guest clock delta is within tolerance: 116.96856ms
	I0116 23:55:23.239994   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 20.672247822s
	I0116 23:55:23.240021   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.240303   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:23.243487   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.243962   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.243999   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.244245   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244731   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244917   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.245023   59622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:23.245091   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.245237   59622 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:23.245261   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.248169   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248391   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248664   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.248691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248835   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.248936   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.249012   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.249043   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249196   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249284   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.249351   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.249454   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249607   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249737   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.380837   59622 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:23.387163   59622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:23.543350   59622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:23.550519   59622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:23.550587   59622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:23.565019   59622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:23.565046   59622 start.go:475] detecting cgroup driver to use...
	I0116 23:55:23.565125   59622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:23.579314   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:23.591247   59622 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:23.591310   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:23.605294   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:23.618799   59622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:23.742752   59622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:23.876604   59622 docker.go:233] disabling docker service ...
	I0116 23:55:23.876678   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:23.891240   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:23.906010   59622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:24.059751   59622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:24.186517   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:24.201344   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:24.218947   59622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 23:55:24.219014   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.230843   59622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:24.230917   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.243120   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.252562   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.264610   59622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:24.275702   59622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:24.284982   59622 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:24.285046   59622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:24.298681   59622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:24.307743   59622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:55:24.425125   59622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:24.597300   59622 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:24.597373   59622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:24.603241   59622 start.go:543] Will wait 60s for crictl version
	I0116 23:55:24.603314   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:24.607580   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:24.648923   59622 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:24.649022   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.696485   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.754660   59622 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 23:55:24.756045   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:24.759033   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759392   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:24.759432   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759771   59622 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:24.764448   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:24.777724   59622 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 23:55:24.777812   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:24.825020   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:24.825088   59622 ssh_runner.go:195] Run: which lz4
	I0116 23:55:24.829208   59622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:24.833495   59622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:24.833523   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 23:55:24.992848   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:27.488098   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:23.916961   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.417588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.441144   60269 api_server.go:72] duration metric: took 2.5243712s to wait for apiserver process to appear ...
	I0116 23:55:24.441176   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:24.441198   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:24.441742   60269 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0116 23:55:24.941292   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.835831   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.835867   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.835882   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.868017   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.868058   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.942282   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.960876   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:27.960928   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:28.442258   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.449969   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.450001   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:24.397456   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:26.397862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.404313   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.941892   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.959617   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.959651   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:29.441742   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:29.446933   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0116 23:55:29.455520   60269 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:29.455548   60269 api_server.go:131] duration metric: took 5.014364838s to wait for apiserver health ...
	I0116 23:55:29.455561   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:29.455569   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:29.457775   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:26.372140   59622 crio.go:444] Took 1.542968 seconds to copy over tarball
	I0116 23:55:26.372233   59622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:29.316720   59622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944443375s)
	I0116 23:55:29.316749   59622 crio.go:451] Took 2.944578 seconds to extract the tarball
	I0116 23:55:29.316760   59622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:29.359053   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:29.407438   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:29.407466   59622 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:55:29.407526   59622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.407582   59622 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.407605   59622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.407624   59622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.407656   59622 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 23:55:29.407657   59622 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.407840   59622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.407530   59622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.409393   59622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 23:55:29.409457   59622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.409480   59622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.409647   59622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.409675   59622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.409682   59622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.622629   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.626907   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.630596   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 23:55:29.633693   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.635868   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.644919   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.649358   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.724339   59622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 23:55:29.724400   59622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.724467   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.795647   59622 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 23:55:29.795694   59622 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.795747   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.844312   59622 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 23:55:29.844373   59622 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 23:55:29.844427   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849856   59622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 23:55:29.849876   59622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.849911   59622 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 23:55:29.849928   59622 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.849956   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850005   59622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 23:55:29.850030   59622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.850047   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.850062   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850101   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.852839   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 23:55:29.872722   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.872753   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.872821   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.872997   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.963139   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 23:55:29.967047   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 23:55:29.981726   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 23:55:30.047814   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 23:55:30.047906   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 23:55:30.047972   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 23:55:30.048002   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 23:55:30.281680   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:30.423881   59622 cache_images.go:92] LoadImages completed in 1.016396141s
	W0116 23:55:30.423996   59622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0116 23:55:30.424113   59622 ssh_runner.go:195] Run: crio config
	I0116 23:55:30.486915   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:30.486935   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:30.486951   59622 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:30.486975   59622 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-771669 NodeName:old-k8s-version-771669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 23:55:30.487151   59622 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-771669"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-771669
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.114:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:30.487252   59622 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-771669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:55:30.487320   59622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 23:55:30.497629   59622 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:30.497706   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:30.505710   59622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 23:55:30.523292   59622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:30.539544   59622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 23:55:30.557436   59622 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:30.561329   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:29.488446   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:32.775251   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:29.459468   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:29.471218   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:29.488687   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:29.499433   60269 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:29.499458   60269 system_pods.go:61] "coredns-5dd5756b68-7kwrd" [38a96fe5-70a8-46e6-b899-b39558e08855] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:29.499465   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [bc2e7805-71f2-4924-80d7-2dd853ebeea9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:29.499472   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [8c01f8da-0156-4d16-b5e7-262427171137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:29.499484   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [04b93c96-ebc0-4257-b480-7be1ea9f7fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:29.499496   60269 system_pods.go:61] "kube-proxy-jmq58" [ec5c282f-04c8-4839-a16f-0a2024e0d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:29.499521   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [11e73d49-a3ba-44b3-9630-fd07fb23777f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:29.499533   60269 system_pods.go:61] "metrics-server-57f55c9bc5-bkbpm" [6ddb8af1-da20-4400-b6ba-6f0cf342b115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:29.499538   60269 system_pods.go:61] "storage-provisioner" [5b22598c-c5e0-4a9e-96f3-1732ecd018a1] Running
	I0116 23:55:29.499544   60269 system_pods.go:74] duration metric: took 10.840963ms to wait for pod list to return data ...
	I0116 23:55:29.499550   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:29.502918   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:29.502954   60269 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:29.502965   60269 node_conditions.go:105] duration metric: took 3.409475ms to run NodePressure ...
	I0116 23:55:29.502985   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:29.743687   60269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749616   60269 kubeadm.go:787] kubelet initialised
	I0116 23:55:29.749676   60269 kubeadm.go:788] duration metric: took 5.958924ms waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749687   60269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:29.756788   60269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.762593   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762669   60269 pod_ready.go:81] duration metric: took 5.856721ms waiting for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.762686   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762695   60269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.768772   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768801   60269 pod_ready.go:81] duration metric: took 6.092773ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.768816   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768824   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.775409   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775442   60269 pod_ready.go:81] duration metric: took 6.605139ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.775455   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775463   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.902106   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902206   60269 pod_ready.go:81] duration metric: took 126.731712ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.902236   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902269   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829869   60269 pod_ready.go:92] pod "kube-proxy-jmq58" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:30.829891   60269 pod_ready.go:81] duration metric: took 927.598475ms waiting for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829900   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:32.831782   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.899557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:33.397105   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.574029   59622 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669 for IP: 192.168.72.114
	I0116 23:55:30.890778   59622 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:30.890952   59622 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:30.891020   59622 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:30.891123   59622 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/client.key
	I0116 23:55:31.309085   59622 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key.9adeb8c5
	I0116 23:55:31.309205   59622 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key
	I0116 23:55:31.309360   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:31.309405   59622 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:31.309417   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:31.309461   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:31.309514   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:31.309547   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:31.309606   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:31.310493   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:31.335886   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:55:31.358617   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:31.382183   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:55:31.407509   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:31.429683   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:31.453368   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:31.476083   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:31.499326   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:31.522939   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:31.548912   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:31.571716   59622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:31.587851   59622 ssh_runner.go:195] Run: openssl version
	I0116 23:55:31.593185   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:31.602521   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.606986   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.607049   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.612447   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:31.622043   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:31.631959   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636586   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636653   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.642415   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:31.651566   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:31.660990   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665574   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665624   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.671129   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:31.680951   59622 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:31.685144   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:31.690488   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:31.696140   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:31.702013   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:31.707887   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:31.713601   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
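Note: the six openssl invocations above all use "-checkend 86400", i.e. they ask whether each control-plane certificate will expire within the next 24 hours before reusing it. The following is a minimal illustrative sketch in Go of the same check, assuming a PEM-encoded certificate path (the path below is just an example taken from the log); it is not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within the
// given window, which is what `openssl x509 -checkend 86400` verifies.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}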
	I0116 23:55:31.719957   59622 kubeadm.go:404] StartCluster: {Name:old-k8s-version-771669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:31.720050   59622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:31.720106   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:31.764090   59622 cri.go:89] found id: ""
	I0116 23:55:31.764179   59622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:31.772783   59622 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:31.772800   59622 kubeadm.go:636] restartCluster start
	I0116 23:55:31.772900   59622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:31.782951   59622 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:31.784108   59622 kubeconfig.go:92] found "old-k8s-version-771669" server: "https://192.168.72.114:8443"
	I0116 23:55:31.786822   59622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:31.795516   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:31.795564   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:31.806541   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.296087   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.296205   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.308136   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.796155   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.796250   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.812275   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.295834   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.295918   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.309867   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.796504   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.796592   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.808880   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.296500   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.296567   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.308101   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.795674   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.795765   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.808334   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:35.295900   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.295998   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.308522   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.987445   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:37.488388   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:34.836821   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:36.837242   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.896319   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.396168   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.796048   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.796157   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.809841   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.296449   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.296573   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.309339   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.795874   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.795953   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.810740   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.296322   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.296421   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.308384   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.796469   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.796576   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.810173   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.295663   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.295750   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.307391   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.795952   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.796050   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.809147   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.295669   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.295754   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.308210   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.796104   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.796226   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.808134   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:40.295713   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.295815   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.307552   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.986946   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.487118   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.838230   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:39.837451   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:39.837475   60269 pod_ready.go:81] duration metric: took 9.007568234s waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:39.837495   60269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:41.844595   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.397089   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.896014   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.795619   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.795698   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.809529   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.296081   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.296153   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.309642   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.796355   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.796439   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.808383   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.808409   59622 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:41.808417   59622 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:41.808426   59622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:41.808480   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:41.851612   59622 cri.go:89] found id: ""
	I0116 23:55:41.851668   59622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:41.867103   59622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:41.876244   59622 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:41.876306   59622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886007   59622 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886029   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.004968   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.972680   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.175241   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.242840   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.330848   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:43.330935   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:43.831021   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.331539   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.831545   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.331601   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.354248   59622 api_server.go:72] duration metric: took 2.023403352s to wait for apiserver process to appear ...
	I0116 23:55:45.354271   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:45.354287   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:45.354802   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": dial tcp 192.168.72.114:8443: connect: connection refused
	I0116 23:55:44.988114   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.486765   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:43.846368   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.848129   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:48.344150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:44.897147   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.396873   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.855032   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:50.855392   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 23:55:50.855430   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.372327   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.372361   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.372383   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.429072   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.429102   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.854848   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.861367   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:51.861393   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.354990   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.360925   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:52.360951   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.854778   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.861036   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:55:52.868982   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:55:52.869013   59622 api_server.go:131] duration metric: took 7.514729701s to wait for apiserver health ...
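Note: the healthz sequence above (refused connection, then 403, then 500 with failing post-start hooks, then 200) is the normal progression while the restarted apiserver finishes bootstrapping. The sketch below, under the assumption of a reachable /healthz URL (TLS verification is skipped only for brevity in this example), shows the shape of such a retry loop in Go; it is not the api_server.go code itself.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes, mirroring the retry loop visible in the log above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// NOTE: certificate verification disabled only for this illustrative sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.72.114:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}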
	I0116 23:55:52.869024   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:52.869033   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:52.870842   59622 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:49.486891   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.489411   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:50.345462   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.345784   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:49.397270   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.397489   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:53.398253   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.872155   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:52.883251   59622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:52.904708   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:52.916515   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:55:52.916550   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:55:52.916558   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:55:52.916564   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:55:52.916571   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Pending
	I0116 23:55:52.916577   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:55:52.916584   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:55:52.916597   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:55:52.916606   59622 system_pods.go:74] duration metric: took 11.876364ms to wait for pod list to return data ...
	I0116 23:55:52.916618   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:52.920125   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:52.920158   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:52.920178   59622 node_conditions.go:105] duration metric: took 3.551281ms to run NodePressure ...
	I0116 23:55:52.920199   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:53.157112   59622 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161560   59622 kubeadm.go:787] kubelet initialised
	I0116 23:55:53.161590   59622 kubeadm.go:788] duration metric: took 4.45031ms waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161601   59622 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:53.167210   59622 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.172679   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172705   59622 pod_ready.go:81] duration metric: took 5.453621ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.172713   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172722   59622 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.178090   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178121   59622 pod_ready.go:81] duration metric: took 5.38864ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.178132   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178141   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.183932   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183963   59622 pod_ready.go:81] duration metric: took 5.809315ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.183973   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183979   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.309476   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309502   59622 pod_ready.go:81] duration metric: took 125.513469ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.309518   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309526   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.710400   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710426   59622 pod_ready.go:81] duration metric: took 400.892114ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.710435   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710441   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:54.108608   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108638   59622 pod_ready.go:81] duration metric: took 398.187187ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:54.108652   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108661   59622 pod_ready.go:38] duration metric: took 947.048567ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
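Note: each pod_ready.go entry above is a poll of the pod's Ready condition, skipped early here because the node itself is not yet Ready. A minimal client-go sketch of that condition check follows, assuming an existing kubeconfig (the path and pod name below are placeholders taken from the log for illustration only).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has its Ready condition set to True,
// the same condition pod_ready.go keeps re-checking in the log above.
func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podReady(cs, "kube-system", "coredns-5644d7b6d9-9njqp")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}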
	I0116 23:55:54.108682   59622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:54.128862   59622 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:54.128889   59622 kubeadm.go:640] restartCluster took 22.356081524s
	I0116 23:55:54.128900   59622 kubeadm.go:406] StartCluster complete in 22.408946885s
	I0116 23:55:54.128919   59622 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.129004   59622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:54.131909   59622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.132201   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:54.132350   59622 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:54.132423   59622 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-771669"
	I0116 23:55:54.132445   59622 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-771669"
	I0116 23:55:54.132446   59622 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-771669"
	W0116 23:55:54.132457   59622 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:54.132467   59622 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:54.132468   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0116 23:55:54.132479   59622 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:54.132520   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132551   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132889   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.132943   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133041   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133083   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133245   59622 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-771669"
	I0116 23:55:54.133294   59622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-771669"
	I0116 23:55:54.133724   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133789   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.148645   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0116 23:55:54.148879   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 23:55:54.149227   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149356   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149715   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149739   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.149900   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149917   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.150032   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150210   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.150281   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150883   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.150932   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.154047   59622 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-771669"
	W0116 23:55:54.154070   59622 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:54.154099   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.154457   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.154502   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.156296   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0116 23:55:54.156719   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.157170   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.157199   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.157673   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.158266   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.158321   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.168301   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0116 23:55:54.168898   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.169505   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.169524   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.169888   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.170106   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.171966   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.174198   59622 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:54.173406   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0116 23:55:54.179587   59622 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.179605   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:54.179625   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.174560   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0116 23:55:54.180004   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180109   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180627   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180653   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180768   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180790   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180993   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181177   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181353   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.181578   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.181627   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.183580   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.185359   59622 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:54.184028   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.184548   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.186663   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:54.186672   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.186679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:54.186699   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.186698   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.186864   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.186964   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.187041   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.189698   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190070   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.190133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190266   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.190461   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.190582   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.190678   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.215481   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0116 23:55:54.215974   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.216416   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.216435   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.216816   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.217016   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.219327   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.219556   59622 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.219571   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:54.219588   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.222719   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223367   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.223154   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.223442   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223564   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.223712   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.223850   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.356173   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:54.356192   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:54.371191   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.410651   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:54.410679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:54.413826   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.524186   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.524211   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:54.553600   59622 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:54.610636   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.692080   59622 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-771669" context rescaled to 1 replicas
	I0116 23:55:54.692117   59622 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:54.694001   59622 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:54.695339   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:55.104119   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104142   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104162   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104148   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104471   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104493   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.104504   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104514   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104558   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104729   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104745   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104748   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105133   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105152   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105185   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.105199   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.105402   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.105496   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105518   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.113836   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.113861   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.114230   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.114254   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.114275   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.125955   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.125983   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.125955   59622 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:55:55.126228   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126243   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126267   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.126278   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.126579   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126599   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126609   59622 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:55.126587   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.128592   59622 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 23:55:55.129717   59622 addons.go:505] enable addons completed in 997.38021ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 23:55:53.987019   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.987081   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.485357   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:54.345875   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:56.347375   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.898737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.905488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.130634   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:59.630394   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:56:00.487739   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.985925   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.845233   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:00.845467   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:03.344488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.130130   59622 node_ready.go:49] node "old-k8s-version-771669" has status "Ready":"True"
	I0116 23:56:02.130152   59622 node_ready.go:38] duration metric: took 7.004088356s waiting for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:56:02.130160   59622 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.135239   59622 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140322   59622 pod_ready.go:92] pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.140347   59622 pod_ready.go:81] duration metric: took 5.084772ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140358   59622 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144917   59622 pod_ready.go:92] pod "etcd-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.144938   59622 pod_ready.go:81] duration metric: took 4.572247ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144946   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149588   59622 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.149606   59622 pod_ready.go:81] duration metric: took 4.65461ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149614   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153874   59622 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.153891   59622 pod_ready.go:81] duration metric: took 4.272031ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153899   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531721   59622 pod_ready.go:92] pod "kube-proxy-9ghls" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.531742   59622 pod_ready.go:81] duration metric: took 377.837979ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531751   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930934   59622 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.930957   59622 pod_ready.go:81] duration metric: took 399.199037ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930966   59622 pod_ready.go:38] duration metric: took 800.791409ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.930982   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:56:02.931031   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:56:02.945606   59622 api_server.go:72] duration metric: took 8.253459173s to wait for apiserver process to appear ...
	I0116 23:56:02.945631   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:56:02.945649   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:56:02.952493   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:56:02.953510   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:56:02.953536   59622 api_server.go:131] duration metric: took 7.895148ms to wait for apiserver health ...
	I0116 23:56:02.953545   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:56:03.133648   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:56:03.133673   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.133679   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.133683   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.133688   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.133691   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.133695   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.133698   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.133704   59622 system_pods.go:74] duration metric: took 180.152859ms to wait for pod list to return data ...
	I0116 23:56:03.133710   59622 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:56:03.331291   59622 default_sa.go:45] found service account: "default"
	I0116 23:56:03.331318   59622 default_sa.go:55] duration metric: took 197.601815ms for default service account to be created ...
	I0116 23:56:03.331327   59622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:56:03.535418   59622 system_pods.go:86] 7 kube-system pods found
	I0116 23:56:03.535445   59622 system_pods.go:89] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.535450   59622 system_pods.go:89] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.535454   59622 system_pods.go:89] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.535459   59622 system_pods.go:89] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.535462   59622 system_pods.go:89] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.535466   59622 system_pods.go:89] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.535470   59622 system_pods.go:89] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.535476   59622 system_pods.go:126] duration metric: took 204.144185ms to wait for k8s-apps to be running ...
	I0116 23:56:03.535483   59622 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:56:03.535528   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:56:03.558457   59622 system_svc.go:56] duration metric: took 22.958568ms WaitForService to wait for kubelet.
	I0116 23:56:03.558483   59622 kubeadm.go:581] duration metric: took 8.866344408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:56:03.558508   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:56:03.731393   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:56:03.731421   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:56:03.731429   59622 node_conditions.go:105] duration metric: took 172.916822ms to run NodePressure ...
	I0116 23:56:03.731440   59622 start.go:228] waiting for startup goroutines ...
	I0116 23:56:03.731446   59622 start.go:233] waiting for cluster config update ...
	I0116 23:56:03.731455   59622 start.go:242] writing updated cluster config ...
	I0116 23:56:03.731701   59622 ssh_runner.go:195] Run: rm -f paused
	I0116 23:56:03.779121   59622 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 23:56:03.780832   59622 out.go:177] 
	W0116 23:56:03.782249   59622 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 23:56:03.783563   59622 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 23:56:03.784839   59622 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-771669" cluster and "default" namespace by default
	I0116 23:56:00.398654   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.895567   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:04.986421   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:06.987967   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.844145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.844338   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.397178   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.895626   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.486597   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:11.987301   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:10.345558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.346663   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.896758   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.397091   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.488021   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.488653   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.844671   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.846046   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.897098   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:17.396519   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.986905   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.488422   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.846198   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.344147   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:19.397728   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.896773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.986213   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:25.986326   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:27.987150   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.845648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.344054   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:28.344553   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:24.396383   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.896341   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.487401   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.986835   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.346441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.847915   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:29.396831   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:31.397001   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:33.896875   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.486456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.488505   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:34.852382   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.347707   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.897340   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:38.397188   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.987512   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.487096   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.845150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:40.397474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.895926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.985826   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.987077   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.344935   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.844558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:45.397742   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:47.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:48.987672   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.488276   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.344755   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.844573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.902616   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:52.397613   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.989294   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:56.486373   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.844691   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:55.844956   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.345033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:54.899462   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:57.396680   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.986702   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.485949   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.486250   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:00.347078   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:02.845105   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:59.397016   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.397815   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.898419   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.486385   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.486685   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.344293   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.345029   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:06.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:08.397358   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.986254   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:11.986807   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.845903   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.345589   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:10.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.896725   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:13.986990   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.487092   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:14.845336   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.845800   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:15.396130   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:17.399737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:18.986833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:20.987345   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.486929   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.344648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.345638   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.896048   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.897272   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:25.987181   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.488006   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.846298   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.345451   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.346186   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:24.398032   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.896171   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.987497   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:33.485899   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.347831   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:32.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:29.398760   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:31.896331   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.486038   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.487296   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.344615   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.844449   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:34.397051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:36.400079   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:38.896897   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.492372   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.987336   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.847519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:42.346252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.396236   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.396714   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.988240   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:46.486455   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:48.487134   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:44.848036   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.345407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:45.397310   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.397378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:50.986902   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.492230   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.845627   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.397826   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.895923   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.897342   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:55.986753   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:57.986861   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:54.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.344864   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.345725   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.897155   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.486888   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.987550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.844347   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.846516   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:01.396565   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:03.397374   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:04.990116   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.487567   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.345481   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.844570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.897023   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:08.396985   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.990087   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.490589   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.844815   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:11.845732   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:10.895979   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.896502   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.986451   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.986611   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.344767   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.844872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:15.398203   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:17.399261   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:18.987191   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.487703   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:23.487926   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.347376   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.845439   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.896972   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:22.397424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:25.987262   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.486174   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.344012   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.347050   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.398243   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.896557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.987243   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.988415   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.844551   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.845899   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.846576   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:29.396646   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:31.397556   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:33.896411   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.486850   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.985735   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.344337   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.344473   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.896685   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.898876   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.986999   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.486890   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.345534   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:41.345897   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:40.396241   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.396546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.987464   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.988853   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:43.846142   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.343994   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.396719   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.896228   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.896671   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:49.486803   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:51.491540   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.845009   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.847872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:52.847933   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.897309   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.396763   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.987492   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:56.486550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:58.486963   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.346425   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.347346   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.397687   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.399191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:00.987456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.486837   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.843983   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.844326   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.895907   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.896151   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.900424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:05.991223   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.486493   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.844751   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.344021   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.344949   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.397063   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.895750   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.987148   59938 pod_ready.go:81] duration metric: took 4m0.007687151s waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:08.987175   59938 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 23:59:08.987182   59938 pod_ready.go:38] duration metric: took 4m1.609147819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:08.987199   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:59:08.987235   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:08.987285   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:09.035133   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:09.035154   59938 cri.go:89] found id: ""
	I0116 23:59:09.035161   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:09.035211   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.039082   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:09.039138   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:09.085096   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:09.085167   59938 cri.go:89] found id: ""
	I0116 23:59:09.085181   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:09.085246   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.090821   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:09.090893   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:09.127517   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.127548   59938 cri.go:89] found id: ""
	I0116 23:59:09.127558   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:09.127620   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.131643   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:09.131759   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:09.168954   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:09.168979   59938 cri.go:89] found id: ""
	I0116 23:59:09.168988   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:09.169049   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.173389   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:09.173454   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:09.212516   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.212543   59938 cri.go:89] found id: ""
	I0116 23:59:09.212549   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:09.212597   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.216401   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:09.216458   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:09.253140   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.253166   59938 cri.go:89] found id: ""
	I0116 23:59:09.253176   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:09.253235   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.257248   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:09.257315   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:09.296077   59938 cri.go:89] found id: ""
	I0116 23:59:09.296108   59938 logs.go:284] 0 containers: []
	W0116 23:59:09.296119   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:09.296126   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:09.296184   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:09.346212   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:09.346234   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:09.346240   59938 cri.go:89] found id: ""
	I0116 23:59:09.346261   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:09.346320   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.350651   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.353960   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:09.353984   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.387875   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:09.387900   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.428147   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:09.428173   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:09.481107   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:09.481135   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:09.536958   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:09.536994   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:09.550512   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:09.550547   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.605837   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:09.605870   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:10.096496   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:10.096548   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:10.134931   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:10.134973   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:10.276791   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:10.276824   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:10.335509   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:10.335544   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:10.395664   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:10.395708   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.431013   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:10.431051   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:12.975358   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:59:12.989628   59938 api_server.go:72] duration metric: took 4m12.851755215s to wait for apiserver process to appear ...
	I0116 23:59:12.989650   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:59:12.989689   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:12.989738   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:13.026039   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.026071   59938 cri.go:89] found id: ""
	I0116 23:59:13.026083   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:13.026138   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.030174   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:13.030236   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:13.067808   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:13.067834   59938 cri.go:89] found id: ""
	I0116 23:59:13.067840   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:13.067888   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.072042   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:13.072118   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:13.111330   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.111351   59938 cri.go:89] found id: ""
	I0116 23:59:13.111359   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:13.111403   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.115095   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:13.115187   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:13.158668   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:13.158691   59938 cri.go:89] found id: ""
	I0116 23:59:13.158699   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:13.158758   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.162836   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:13.162899   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:13.202353   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:13.202372   59938 cri.go:89] found id: ""
	I0116 23:59:13.202379   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:13.202425   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.206475   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:13.206544   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:13.241036   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:13.241069   59938 cri.go:89] found id: ""
	I0116 23:59:13.241080   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:13.241136   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.245245   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:13.245309   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:13.286069   59938 cri.go:89] found id: ""
	I0116 23:59:13.286098   59938 logs.go:284] 0 containers: []
	W0116 23:59:13.286107   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:13.286115   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:13.286178   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:13.324129   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.324148   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.324152   59938 cri.go:89] found id: ""
	I0116 23:59:13.324159   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:13.324201   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.328325   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.332030   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:13.332052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:13.345141   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:13.345181   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.404778   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:13.404809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.441286   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:13.441323   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:13.503668   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:13.503702   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.542599   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:13.542631   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.347184   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:12.844417   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:10.896545   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.397454   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.578579   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:13.578609   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.615906   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:13.615934   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:14.022019   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:14.022058   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:14.139776   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:14.139809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:14.201936   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:14.201970   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:14.240473   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:14.240500   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:14.291008   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:14.291037   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:16.843555   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:59:16.849532   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:59:16.850519   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:59:16.850538   59938 api_server.go:131] duration metric: took 3.860882856s to wait for apiserver health ...
	I0116 23:59:16.850547   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:59:16.850568   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:16.850610   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:16.900417   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:16.900434   59938 cri.go:89] found id: ""
	I0116 23:59:16.900441   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:16.900493   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.905495   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:16.905548   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:16.945387   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:16.945406   59938 cri.go:89] found id: ""
	I0116 23:59:16.945413   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:16.945463   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.949948   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:16.950016   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:16.987183   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:16.987202   59938 cri.go:89] found id: ""
	I0116 23:59:16.987209   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:16.987252   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.992140   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:16.992191   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:17.029253   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.029275   59938 cri.go:89] found id: ""
	I0116 23:59:17.029282   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:17.029336   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.033524   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:17.033609   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:17.068889   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:17.068913   59938 cri.go:89] found id: ""
	I0116 23:59:17.068932   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:17.068986   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.072818   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:17.072885   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:17.111186   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.111207   59938 cri.go:89] found id: ""
	I0116 23:59:17.111216   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:17.111279   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.115133   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:17.115192   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:17.150279   59938 cri.go:89] found id: ""
	I0116 23:59:17.150307   59938 logs.go:284] 0 containers: []
	W0116 23:59:17.150316   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:17.150321   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:17.150401   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:17.192284   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.192321   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.192328   59938 cri.go:89] found id: ""
	I0116 23:59:17.192338   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:17.192394   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.196472   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.200243   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:17.200266   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.240155   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:17.240188   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:17.252553   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:17.252585   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.304688   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:17.304721   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.346444   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:17.346470   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:17.497208   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:17.497241   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:17.561621   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:17.561648   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:17.611648   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:17.611677   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.646407   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:17.646436   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:17.991476   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:17.991528   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:18.053214   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:18.053251   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:18.128011   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:18.128049   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:18.165018   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:18.165052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:15.345715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.849104   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:15.896059   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.890054   60073 pod_ready.go:81] duration metric: took 4m0.00102229s waiting for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:17.890102   60073 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:17.890127   60073 pod_ready.go:38] duration metric: took 4m7.665333761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:17.890162   60073 kubeadm.go:640] restartCluster took 4m29.748178484s
	W0116 23:59:17.890247   60073 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:17.890288   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:20.715055   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:59:20.715096   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.715109   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.715116   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.715123   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.715129   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.715136   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.715146   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.715156   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.715180   59938 system_pods.go:74] duration metric: took 3.864627163s to wait for pod list to return data ...
	I0116 23:59:20.715190   59938 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:59:20.718138   59938 default_sa.go:45] found service account: "default"
	I0116 23:59:20.718165   59938 default_sa.go:55] duration metric: took 2.964863ms for default service account to be created ...
	I0116 23:59:20.718175   59938 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:59:20.724393   59938 system_pods.go:86] 8 kube-system pods found
	I0116 23:59:20.724420   59938 system_pods.go:89] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.724428   59938 system_pods.go:89] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.724435   59938 system_pods.go:89] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.724443   59938 system_pods.go:89] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.724449   59938 system_pods.go:89] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.724457   59938 system_pods.go:89] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.724467   59938 system_pods.go:89] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.724479   59938 system_pods.go:89] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.724490   59938 system_pods.go:126] duration metric: took 6.307831ms to wait for k8s-apps to be running ...
	I0116 23:59:20.724503   59938 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:59:20.724558   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:20.739056   59938 system_svc.go:56] duration metric: took 14.504317ms WaitForService to wait for kubelet.
	I0116 23:59:20.739102   59938 kubeadm.go:581] duration metric: took 4m20.601225794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:59:20.739130   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:59:20.742521   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:59:20.742550   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:59:20.742565   59938 node_conditions.go:105] duration metric: took 3.429513ms to run NodePressure ...
	I0116 23:59:20.742581   59938 start.go:228] waiting for startup goroutines ...
	I0116 23:59:20.742594   59938 start.go:233] waiting for cluster config update ...
	I0116 23:59:20.742607   59938 start.go:242] writing updated cluster config ...
	I0116 23:59:20.742897   59938 ssh_runner.go:195] Run: rm -f paused
	I0116 23:59:20.796748   59938 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 23:59:20.799136   59938 out.go:177] * Done! kubectl is now configured to use "no-preload-085322" cluster and "default" namespace by default
	I0116 23:59:20.345640   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:22.845018   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:24.845103   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:26.846579   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:29.345070   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.346027   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:33.346506   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.203795   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.313480768s)
	I0116 23:59:31.203876   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:31.217359   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:31.228245   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:31.238220   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:31.238268   60073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:31.453638   60073 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 23:59:35.845570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:37.845959   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:42.067699   60073 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:42.067758   60073 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:42.067846   60073 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:42.067963   60073 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:42.068086   60073 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:42.068177   60073 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:42.069920   60073 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:42.070029   60073 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:42.070134   60073 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:42.070239   60073 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:42.070320   60073 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:42.070461   60073 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:42.070543   60073 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:42.070628   60073 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:42.070700   60073 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:42.070790   60073 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:42.070885   60073 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:42.070932   60073 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:42.070998   60073 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:42.071063   60073 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:42.071135   60073 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:42.071215   60073 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:42.071285   60073 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:42.071387   60073 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:42.071470   60073 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:42.072979   60073 out.go:204]   - Booting up control plane ...
	I0116 23:59:42.073092   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:42.073200   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:42.073276   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:42.073388   60073 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:42.073521   60073 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:42.073576   60073 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:42.073797   60073 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:42.073902   60073 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002800 seconds
	I0116 23:59:42.074028   60073 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 23:59:42.074167   60073 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 23:59:42.074262   60073 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 23:59:42.074513   60073 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-837871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 23:59:42.074590   60073 kubeadm.go:322] [bootstrap-token] Using token: ta3wls.bkzq7grnlnkl7idk
	I0116 23:59:42.076261   60073 out.go:204]   - Configuring RBAC rules ...
	I0116 23:59:42.076394   60073 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 23:59:42.076494   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 23:59:42.076672   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 23:59:42.076836   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 23:59:42.077027   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 23:59:42.077141   60073 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 23:59:42.077286   60073 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 23:59:42.077338   60073 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 23:59:42.077401   60073 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 23:59:42.077420   60073 kubeadm.go:322] 
	I0116 23:59:42.077490   60073 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 23:59:42.077501   60073 kubeadm.go:322] 
	I0116 23:59:42.077590   60073 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 23:59:42.077599   60073 kubeadm.go:322] 
	I0116 23:59:42.077631   60073 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 23:59:42.077704   60073 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 23:59:42.077768   60073 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 23:59:42.077777   60073 kubeadm.go:322] 
	I0116 23:59:42.077841   60073 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 23:59:42.077855   60073 kubeadm.go:322] 
	I0116 23:59:42.077910   60073 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 23:59:42.077918   60073 kubeadm.go:322] 
	I0116 23:59:42.077980   60073 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 23:59:42.078071   60073 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 23:59:42.078167   60073 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 23:59:42.078177   60073 kubeadm.go:322] 
	I0116 23:59:42.078274   60073 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 23:59:42.078382   60073 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 23:59:42.078392   60073 kubeadm.go:322] 
	I0116 23:59:42.078488   60073 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078612   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 23:59:42.078642   60073 kubeadm.go:322] 	--control-plane 
	I0116 23:59:42.078651   60073 kubeadm.go:322] 
	I0116 23:59:42.078749   60073 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 23:59:42.078758   60073 kubeadm.go:322] 
	I0116 23:59:42.078854   60073 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078989   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 23:59:42.079007   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:59:42.079017   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:59:42.080763   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:59:39.838671   60269 pod_ready.go:81] duration metric: took 4m0.001157455s waiting for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:39.838703   60269 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:39.838724   60269 pod_ready.go:38] duration metric: took 4m10.089026356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:39.838774   60269 kubeadm.go:640] restartCluster took 4m29.617057242s
	W0116 23:59:39.838852   60269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:39.838881   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:42.082183   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:59:42.116830   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:59:42.163609   60073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:59:42.163699   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.163705   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=embed-certs-837871 minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.221959   60073 ops.go:34] apiserver oom_adj: -16
	I0116 23:59:42.506451   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.007345   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.506584   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.007197   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.507002   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.006480   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.506954   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.006461   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.506833   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.007157   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.506780   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.007146   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.506504   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:49.006489   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.364253   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.525344336s)
	I0116 23:59:53.364334   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:53.379240   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:53.389562   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:53.400331   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:53.400385   60269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:53.462116   60269 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:53.462202   60269 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:53.624890   60269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:53.625015   60269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:53.625132   60269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:53.877364   60269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:49.506939   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.007132   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.506909   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.006499   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.506508   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.006475   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.507008   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.007272   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.506479   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.007240   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.507034   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.651685   60073 kubeadm.go:1088] duration metric: took 12.488048347s to wait for elevateKubeSystemPrivileges.
	I0116 23:59:54.651729   60073 kubeadm.go:406] StartCluster complete in 5m6.561279262s
	I0116 23:59:54.651753   60073 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.651855   60073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:59:54.654608   60073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.654868   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:59:54.654894   60073 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:59:54.654964   60073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-837871"
	I0116 23:59:54.654980   60073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-837871"
	I0116 23:59:54.655005   60073 addons.go:69] Setting metrics-server=true in profile "embed-certs-837871"
	I0116 23:59:54.655018   60073 addons.go:234] Setting addon metrics-server=true in "embed-certs-837871"
	W0116 23:59:54.655027   60073 addons.go:243] addon metrics-server should already be in state true
	I0116 23:59:54.655090   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:59:54.655026   60073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-837871"
	I0116 23:59:54.655160   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.654988   60073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-837871"
	W0116 23:59:54.655234   60073 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:59:54.655271   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.655539   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655568   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655652   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655734   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.672017   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0116 23:59:54.672591   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.673220   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.673241   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.673335   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0116 23:59:54.673863   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0116 23:59:54.673894   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.673865   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674262   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674430   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674447   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.674491   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.674517   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.674764   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.674932   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674943   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.675310   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.675465   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.675601   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.675631   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.679148   60073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-837871"
	W0116 23:59:54.679166   60073 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:59:54.679192   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.679564   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.679582   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.694210   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0116 23:59:54.694711   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.694923   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0116 23:59:54.695308   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.695325   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.695432   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.695724   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.696036   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.696059   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.696124   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.696524   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.697116   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.697142   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.697326   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0116 23:59:54.697741   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.698016   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.700178   60073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:59:54.698504   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.701842   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.701911   60073 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:54.701927   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:59:54.701945   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.704090   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.704258   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.705992   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.706067   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.707873   60073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:59:53.878701   60269 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:53.878801   60269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:53.878881   60269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:53.879376   60269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:53.879833   60269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:53.880391   60269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:53.880900   60269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:53.881422   60269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:53.881941   60269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:53.882468   60269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:53.882982   60269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:53.883410   60269 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:53.883502   60269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:54.118678   60269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:54.334917   60269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:54.487424   60269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:55.124961   60269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:55.125701   60269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:55.128156   60269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:54.706475   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.706576   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.709278   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:59:54.709292   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:59:54.709305   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.709341   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.709501   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.709672   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.709805   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.712515   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713092   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.713180   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713283   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.713426   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.713633   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.713742   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.716354   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0116 23:59:54.716699   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.717118   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.717135   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.717441   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.717677   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.719338   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.719591   60073 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:54.719604   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:59:54.719619   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.722542   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.722963   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.723002   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.723112   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.723259   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.723463   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.723587   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.885431   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 23:59:55.001297   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:59:55.001329   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:59:55.003513   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:55.008428   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:55.068722   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:59:55.068751   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:59:55.129663   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:55.129686   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:59:55.161891   60073 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-837871" context rescaled to 1 replicas
	I0116 23:59:55.161935   60073 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:59:55.164356   60073 out.go:177] * Verifying Kubernetes components...
	I0116 23:59:55.165822   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:55.240612   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:56.696329   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810851137s)
	I0116 23:59:56.696383   60073 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 23:59:56.696338   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.69278648s)
	I0116 23:59:56.696422   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696440   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.696806   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.696868   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.696879   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.696889   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696898   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.697174   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.697191   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.697193   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.729656   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.729685   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.730006   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.730047   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.730051   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.196943   60073 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.031082317s)
	I0116 23:59:57.196991   60073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.197171   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.188708335s)
	I0116 23:59:57.197216   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197232   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197556   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197573   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.197590   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197600   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197905   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.197908   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197976   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.211232   60073 node_ready.go:49] node "embed-certs-837871" has status "Ready":"True"
	I0116 23:59:57.211308   60073 node_ready.go:38] duration metric: took 14.304366ms waiting for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.211330   60073 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:57.230768   60073 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:57.274393   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033730298s)
	I0116 23:59:57.274453   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274471   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.274881   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.274904   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.274915   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274925   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.275196   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.275249   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.275273   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.275284   60073 addons.go:470] Verifying addon metrics-server=true in "embed-certs-837871"
	I0116 23:59:57.277304   60073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 23:59:55.129817   60269 out.go:204]   - Booting up control plane ...
	I0116 23:59:55.129937   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:55.130951   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:55.132943   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:55.149929   60269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:55.151138   60269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:55.151234   60269 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:55.303686   60269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:57.278953   60073 addons.go:505] enable addons completed in 2.62405803s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 23:59:58.738410   60073 pod_ready.go:92] pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.738434   60073 pod_ready.go:81] duration metric: took 1.507588571s waiting for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.738444   60073 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744592   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.744617   60073 pod_ready.go:81] duration metric: took 6.165419ms waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744626   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750130   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.750152   60073 pod_ready.go:81] duration metric: took 5.519057ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750164   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755783   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.755809   60073 pod_ready.go:81] duration metric: took 5.636904ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755821   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801735   60073 pod_ready.go:92] pod "kube-proxy-n2l6s" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.801769   60073 pod_ready.go:81] duration metric: took 45.939564ms waiting for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801784   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:02.807761   60269 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503615 seconds
	I0117 00:00:02.807943   60269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:00:02.828242   60269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:00:03.364977   60269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:00:03.365242   60269 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-967325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:00:03.879636   60269 kubeadm.go:322] [bootstrap-token] Using token: y6fuay.d44apxq5qutu9x05
	I0116 23:59:59.202392   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:59.202420   60073 pod_ready.go:81] duration metric: took 400.626378ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:59.202435   60073 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:01.211490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.710138   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.881170   60269 out.go:204]   - Configuring RBAC rules ...
	I0117 00:00:03.881357   60269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:00:03.888392   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:00:03.896580   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:00:03.900204   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:00:03.907475   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:00:03.911613   60269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:00:03.931171   60269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:00:04.171033   60269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:00:04.300769   60269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:00:04.300793   60269 kubeadm.go:322] 
	I0117 00:00:04.300911   60269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:00:04.300944   60269 kubeadm.go:322] 
	I0117 00:00:04.301038   60269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:00:04.301049   60269 kubeadm.go:322] 
	I0117 00:00:04.301089   60269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:00:04.301161   60269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:00:04.301223   60269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:00:04.301234   60269 kubeadm.go:322] 
	I0117 00:00:04.301302   60269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:00:04.301312   60269 kubeadm.go:322] 
	I0117 00:00:04.301373   60269 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:00:04.301387   60269 kubeadm.go:322] 
	I0117 00:00:04.301445   60269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:00:04.301545   60269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:00:04.301645   60269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:00:04.301656   60269 kubeadm.go:322] 
	I0117 00:00:04.301758   60269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:00:04.301861   60269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:00:04.301871   60269 kubeadm.go:322] 
	I0117 00:00:04.301972   60269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302108   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:00:04.302156   60269 kubeadm.go:322] 	--control-plane 
	I0117 00:00:04.302167   60269 kubeadm.go:322] 
	I0117 00:00:04.302261   60269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:00:04.302272   60269 kubeadm.go:322] 
	I0117 00:00:04.302381   60269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302499   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:00:04.303423   60269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:00:04.303460   60269 cni.go:84] Creating CNI manager for ""
	I0117 00:00:04.303481   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:00:04.305311   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:00:04.307124   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:00:04.322172   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0117 00:00:04.389195   60269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:00:04.389280   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.389289   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=default-k8s-diff-port-967325 minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.714781   60269 ops.go:34] apiserver oom_adj: -16
	I0117 00:00:04.714929   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.215335   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.715241   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.215729   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.715270   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.215562   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.716006   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.215883   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.715530   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.710945   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:08.210490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:09.215561   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:09.715330   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215559   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.715284   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.215535   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.715573   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.215144   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.715603   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.715595   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:12.709378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:14.215373   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:14.715933   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.715488   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.215344   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.714958   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.874728   60269 kubeadm.go:1088] duration metric: took 12.485508304s to wait for elevateKubeSystemPrivileges.
	I0117 00:00:16.874771   60269 kubeadm.go:406] StartCluster complete in 5m6.711968782s
	I0117 00:00:16.874796   60269 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.874888   60269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:00:16.877055   60269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.877357   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0117 00:00:16.877379   60269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0117 00:00:16.877462   60269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877481   60269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877496   60269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877517   60269 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877523   60269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-967325"
	W0117 00:00:16.877526   60269 addons.go:243] addon metrics-server should already be in state true
	I0117 00:00:16.877487   60269 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877580   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0117 00:00:16.877586   60269 addons.go:243] addon storage-provisioner should already be in state true
	I0117 00:00:16.877598   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877641   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.877996   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.878023   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878044   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878110   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.894446   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0117 00:00:16.894710   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0117 00:00:16.894884   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895198   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895375   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895395   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895731   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895757   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895804   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896075   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896401   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.896436   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.896491   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0117 00:00:16.896763   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.897458   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.898007   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.898028   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.898517   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.899079   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.899106   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.900589   60269 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-967325"
	W0117 00:00:16.900606   60269 addons.go:243] addon default-storageclass should already be in state true
	I0117 00:00:16.900632   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.900945   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.900974   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.917329   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0117 00:00:16.918223   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0117 00:00:16.918283   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918593   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918787   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.918806   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919109   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.919135   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919173   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919426   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.919500   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.921674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.923470   60269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0117 00:00:16.922093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.924865   60269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:16.924882   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0117 00:00:16.924900   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.926158   60269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0117 00:00:16.927440   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0117 00:00:16.927461   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0117 00:00:16.927490   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.928105   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.928694   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.929107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.929289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.929432   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.930149   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0117 00:00:16.930552   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.931255   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.931275   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.931335   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931584   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.931606   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931762   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.931908   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.932042   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.932086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.932178   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.933382   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.933419   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.949543   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0117 00:00:16.950092   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.950585   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.950611   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.950912   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.951212   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.952912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.953207   60269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:16.953221   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0117 00:00:16.953242   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.955778   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956104   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.956144   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956381   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.956659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.956808   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.956958   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:17.129430   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0117 00:00:17.167358   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:17.198527   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0117 00:00:17.198553   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0117 00:00:17.313705   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0117 00:00:17.313743   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0117 00:00:17.318720   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:17.387945   60269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-967325" context rescaled to 1 replicas
	I0117 00:00:17.387984   60269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:00:17.391319   60269 out.go:177] * Verifying Kubernetes components...
	I0117 00:00:17.392893   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:00:17.493520   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:17.493544   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0117 00:00:17.613989   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:14.710779   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:17.209946   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:18.852085   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.722614342s)
	I0117 00:00:18.852124   60269 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0117 00:00:19.595960   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.277198121s)
	I0117 00:00:19.595983   60269 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.203057581s)
	I0117 00:00:19.596019   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596022   60269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.596033   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596131   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.428744793s)
	I0117 00:00:19.596164   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596175   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596418   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596437   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596448   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596458   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596544   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596572   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596585   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596603   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596675   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596683   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596697   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.598431   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.598485   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.598507   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.614041   60269 node_ready.go:49] node "default-k8s-diff-port-967325" has status "Ready":"True"
	I0117 00:00:19.614070   60269 node_ready.go:38] duration metric: took 18.033715ms waiting for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.614083   60269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:00:19.631026   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.631065   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.631393   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.631412   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.631430   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.643995   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.685268   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.071240033s)
	I0117 00:00:19.685313   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685685   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685706   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685722   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685725   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.685733   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685949   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685973   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685984   60269 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:19.688162   60269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0117 00:00:19.690707   60269 addons.go:505] enable addons completed in 2.813327403s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0117 00:00:20.653786   60269 pod_ready.go:92] pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.653817   60269 pod_ready.go:81] duration metric: took 1.009789354s waiting for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.653827   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.657327   60269 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657355   60269 pod_ready.go:81] duration metric: took 3.520465ms waiting for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	E0117 00:00:20.657367   60269 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657375   60269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664327   60269 pod_ready.go:92] pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.664345   60269 pod_ready.go:81] duration metric: took 6.963883ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664354   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669229   60269 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.669247   60269 pod_ready.go:81] duration metric: took 4.887581ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669255   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675553   60269 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.675577   60269 pod_ready.go:81] duration metric: took 6.316801ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675585   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800600   60269 pod_ready.go:92] pod "kube-proxy-2z6bl" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:21.800632   60269 pod_ready.go:81] duration metric: took 1.125039774s waiting for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800646   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200536   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:22.200559   60269 pod_ready.go:81] duration metric: took 399.905665ms waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200569   60269 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.212369   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:21.709474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:23.710530   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:24.210445   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:26.709024   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:28.709454   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:25.710634   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:27.710692   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:30.709571   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.710848   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:29.710867   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.209611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:35.208419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:37.708871   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:34.209847   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:36.210863   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:38.211047   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.209274   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711560   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.212061   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711598   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.209016   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211322   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.211051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.709459   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.209458   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.711889   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.210405   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.710123   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:57.208591   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.210670   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:56.711102   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:58.711595   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:59.708515   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.710699   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.210587   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:03.210938   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:04.207715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:06.709563   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:05.211825   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:07.709958   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:09.208156   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:11.208879   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:13.708545   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:10.211100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:12.710100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:16.209033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:18.209754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:14.710821   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:17.212258   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:20.708444   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.712038   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:19.711436   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.210580   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.714772   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:27.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.213488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:26.711404   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.710945   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:32.208179   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.211008   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:31.212442   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:33.711966   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:34.208936   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.209612   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.708413   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.211118   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.214093   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:41.208750   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:43.208812   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:40.710199   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:42.710497   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.708094   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:48.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.210899   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:47.214352   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:50.708669   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:52.709880   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:49.709767   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:51.710715   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:53.714522   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:55.209030   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:57.709205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:56.212226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:58.715976   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:00.209358   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:02.710521   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:01.210842   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:03.710418   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.208742   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:07.210121   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.711354   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:08.211933   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:09.210830   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:11.708402   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:13.710205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:10.212433   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:12.715928   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:16.207633   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:18.208824   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:15.214546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:17.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.209380   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.708970   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.212349   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.711167   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.208762   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.708487   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.212601   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:30.209319   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.708822   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:29.711046   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:35.207798   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.217291   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:34.710869   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.210140   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.707745   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711335   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.708871   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711327   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.207582   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.207988   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:48.709297   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.211602   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.714689   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.208519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.208808   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:49.212952   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.214415   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.710355   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.209145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:57.210556   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.716301   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:58.211226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:59.709541   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.208573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:00.709819   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.712699   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.208754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:06.708448   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:08.709286   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.713780   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:07.213872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:10.709570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:13.208062   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:09.714259   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:12.211448   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:15.209488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:17.709522   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:14.710693   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:16.711192   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:20.207874   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:22.211189   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:19.210191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:21.210773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:23.213975   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:24.708835   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:26.708889   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:25.710691   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:27.711139   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:29.209704   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:31.209811   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:33.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:30.210569   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:32.211539   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:35.708998   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:38.208295   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:34.711729   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:37.210492   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:40.707726   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:42.709246   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:39.211926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:41.711599   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:43.711794   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:44.710010   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:47.208407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:46.211285   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:48.212279   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:49.208869   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:51.210676   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:53.708315   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:50.212776   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:52.710665   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:55.709867   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:58.210415   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:54.711312   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:57.210611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:00.708385   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:03.208916   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210900   60073 pod_ready.go:81] duration metric: took 4m0.008455197s waiting for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	E0117 00:03:59.210913   60073 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:03:59.210923   60073 pod_ready.go:38] duration metric: took 4m1.999568751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:03:59.210941   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:03:59.210977   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:03:59.211045   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:03:59.268921   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.268947   60073 cri.go:89] found id: ""
	I0117 00:03:59.268956   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:03:59.269005   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.273505   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:03:59.273575   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:03:59.316812   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:03:59.316838   60073 cri.go:89] found id: ""
	I0117 00:03:59.316847   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:03:59.316902   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.321703   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:03:59.321778   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:03:59.365900   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:03:59.365920   60073 cri.go:89] found id: ""
	I0117 00:03:59.365927   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:03:59.365979   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.371077   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:03:59.371148   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:03:59.410379   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:03:59.410405   60073 cri.go:89] found id: ""
	I0117 00:03:59.410415   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:03:59.410475   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.414679   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:03:59.414752   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:03:59.452102   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.452137   60073 cri.go:89] found id: ""
	I0117 00:03:59.452146   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:03:59.452208   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.456735   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:03:59.456805   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:03:59.497070   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:03:59.497097   60073 cri.go:89] found id: ""
	I0117 00:03:59.497105   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:03:59.497172   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.501388   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:03:59.501464   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:03:59.542895   60073 cri.go:89] found id: ""
	I0117 00:03:59.542921   60073 logs.go:284] 0 containers: []
	W0117 00:03:59.542929   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:03:59.542935   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:03:59.542986   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:03:59.579487   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:03:59.579510   60073 cri.go:89] found id: ""
	I0117 00:03:59.579529   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:03:59.579583   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.583247   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:03:59.583272   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:03:59.682098   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:03:59.682136   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:03:59.811527   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:03:59.811555   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.858592   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:03:59.858623   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.896044   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:03:59.896077   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:00.305516   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:00.305553   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:00.346703   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:00.346734   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:00.360638   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:00.360671   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:00.405575   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:00.405607   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:00.443294   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:00.443325   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:00.489541   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:00.489572   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:00.547805   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:00.547835   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.085588   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:03.102500   60073 api_server.go:72] duration metric: took 4m7.940532649s to wait for apiserver process to appear ...
	I0117 00:04:03.102525   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:03.102560   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:03.102604   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:03.154743   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.154765   60073 cri.go:89] found id: ""
	I0117 00:04:03.154775   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:03.154837   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.158905   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:03.158964   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:03.199001   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.199026   60073 cri.go:89] found id: ""
	I0117 00:04:03.199035   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:03.199090   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.203757   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:03.203821   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:03.243821   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:03.243853   60073 cri.go:89] found id: ""
	I0117 00:04:03.243862   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:03.243926   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.248835   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:03.248938   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:03.287785   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.287807   60073 cri.go:89] found id: ""
	I0117 00:04:03.287817   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:03.287879   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.291737   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:03.291795   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:03.329647   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.329671   60073 cri.go:89] found id: ""
	I0117 00:04:03.329680   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:03.329740   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.337418   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:03.337513   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:03.375391   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:03.375412   60073 cri.go:89] found id: ""
	I0117 00:04:03.375419   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:03.375468   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.379630   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:03.379697   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:03.418311   60073 cri.go:89] found id: ""
	I0117 00:04:03.418353   60073 logs.go:284] 0 containers: []
	W0117 00:04:03.418366   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:03.418374   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:03.418425   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:03.464391   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.464414   60073 cri.go:89] found id: ""
	I0117 00:04:03.464421   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:03.464465   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.469427   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:03.469463   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:03.568016   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:03.568061   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:03.581553   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:03.581578   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.628971   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:03.629007   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.679732   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:03.679768   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.728836   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:03.728875   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.771849   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:03.771879   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:03.902777   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:03.902816   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.952219   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:03.952255   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:04.003190   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:04.003247   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:05.708428   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:07.708492   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:04.067058   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:04.067090   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:04.446812   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:04.446869   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:07.005449   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0117 00:04:07.011401   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0117 00:04:07.012696   60073 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:07.012723   60073 api_server.go:131] duration metric: took 3.910192448s to wait for apiserver health ...
	I0117 00:04:07.012732   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:07.012758   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:07.012804   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:07.052667   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:07.052699   60073 cri.go:89] found id: ""
	I0117 00:04:07.052708   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:07.052769   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.057415   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:07.057482   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:07.096347   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.096374   60073 cri.go:89] found id: ""
	I0117 00:04:07.096383   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:07.096445   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.100499   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:07.100598   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:07.145539   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:07.145561   60073 cri.go:89] found id: ""
	I0117 00:04:07.145567   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:07.145625   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.149880   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:07.149936   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:07.188723   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:07.188751   60073 cri.go:89] found id: ""
	I0117 00:04:07.188760   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:07.188822   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.193191   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:07.193259   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:07.236787   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.236811   60073 cri.go:89] found id: ""
	I0117 00:04:07.236820   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:07.236876   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.241167   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:07.241219   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:07.279432   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.279453   60073 cri.go:89] found id: ""
	I0117 00:04:07.279462   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:07.279527   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.283548   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:07.283618   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:07.319879   60073 cri.go:89] found id: ""
	I0117 00:04:07.319912   60073 logs.go:284] 0 containers: []
	W0117 00:04:07.319922   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:07.319930   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:07.319992   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:07.356138   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.356162   60073 cri.go:89] found id: ""
	I0117 00:04:07.356170   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:07.356219   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.360310   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:07.360339   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:07.457151   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:07.457197   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.501163   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:07.501207   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.544248   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:07.544279   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.593284   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:07.593321   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.635978   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:07.636016   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:07.950451   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:07.950489   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:08.003046   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:08.003089   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:08.017299   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:08.017341   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:08.152348   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:08.152401   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:08.213047   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:08.213084   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:08.249860   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:08.249897   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:10.813629   60073 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:10.813656   60073 system_pods.go:61] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.813670   60073 system_pods.go:61] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.813676   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.813681   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.813685   60073 system_pods.go:61] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.813689   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.813695   60073 system_pods.go:61] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.813699   60073 system_pods.go:61] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.813707   60073 system_pods.go:74] duration metric: took 3.800969531s to wait for pod list to return data ...
	I0117 00:04:10.813714   60073 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:10.816640   60073 default_sa.go:45] found service account: "default"
	I0117 00:04:10.816662   60073 default_sa.go:55] duration metric: took 2.941561ms for default service account to be created ...
	I0117 00:04:10.816669   60073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:10.823246   60073 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:10.823270   60073 system_pods.go:89] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.823274   60073 system_pods.go:89] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.823279   60073 system_pods.go:89] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.823283   60073 system_pods.go:89] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.823287   60073 system_pods.go:89] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.823291   60073 system_pods.go:89] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.823297   60073 system_pods.go:89] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.823302   60073 system_pods.go:89] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.823309   60073 system_pods.go:126] duration metric: took 6.635452ms to wait for k8s-apps to be running ...
	I0117 00:04:10.823316   60073 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:10.823358   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:10.840725   60073 system_svc.go:56] duration metric: took 17.401272ms WaitForService to wait for kubelet.
	I0117 00:04:10.840756   60073 kubeadm.go:581] duration metric: took 4m15.678792469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:10.840782   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:10.843904   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:10.843926   60073 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:10.843938   60073 node_conditions.go:105] duration metric: took 3.150197ms to run NodePressure ...
	I0117 00:04:10.843949   60073 start.go:228] waiting for startup goroutines ...
	I0117 00:04:10.843954   60073 start.go:233] waiting for cluster config update ...
	I0117 00:04:10.843963   60073 start.go:242] writing updated cluster config ...
	I0117 00:04:10.844214   60073 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:10.894554   60073 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:10.896971   60073 out.go:177] * Done! kubectl is now configured to use "embed-certs-837871" cluster and "default" namespace by default
	I0117 00:04:10.209252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:12.707441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:14.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:17.208289   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:19.708419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:21.708960   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:22.208465   60269 pod_ready.go:81] duration metric: took 4m0.007885269s waiting for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	E0117 00:04:22.208486   60269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:04:22.208494   60269 pod_ready.go:38] duration metric: took 4m2.594399816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:04:22.208508   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:04:22.208558   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:22.208608   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:22.258977   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.259005   60269 cri.go:89] found id: ""
	I0117 00:04:22.259013   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:22.259116   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.264067   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:22.264126   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:22.302361   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:22.302396   60269 cri.go:89] found id: ""
	I0117 00:04:22.302407   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:22.302471   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.306898   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:22.306956   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:22.347083   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.347110   60269 cri.go:89] found id: ""
	I0117 00:04:22.347119   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:22.347177   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.352368   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:22.352441   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:22.392093   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:22.392121   60269 cri.go:89] found id: ""
	I0117 00:04:22.392131   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:22.392264   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.397726   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:22.397791   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:22.434242   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:22.434265   60269 cri.go:89] found id: ""
	I0117 00:04:22.434275   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:22.434342   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.438904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:22.438969   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:22.474797   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.474818   60269 cri.go:89] found id: ""
	I0117 00:04:22.474828   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:22.474874   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.478956   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:22.479020   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:22.517049   60269 cri.go:89] found id: ""
	I0117 00:04:22.517078   60269 logs.go:284] 0 containers: []
	W0117 00:04:22.517089   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:22.517096   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:22.517160   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:22.566393   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:22.566419   60269 cri.go:89] found id: ""
	I0117 00:04:22.566428   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:22.566486   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.572179   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:22.572206   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.624440   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:22.624471   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.666603   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:22.666629   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.734797   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:22.734829   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:22.827906   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:22.827941   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:22.842239   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:22.842269   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:22.990196   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:22.990226   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:23.048894   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:23.048933   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:23.093309   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:23.093340   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:23.135374   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:23.135400   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:23.172339   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:23.172366   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:23.567228   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:23.567266   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:26.111237   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:26.127331   60269 api_server.go:72] duration metric: took 4m8.739316517s to wait for apiserver process to appear ...
	I0117 00:04:26.127358   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:26.127403   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:26.127465   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:26.164726   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:26.164752   60269 cri.go:89] found id: ""
	I0117 00:04:26.164763   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:26.164824   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.168448   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:26.168500   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:26.205643   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:26.205673   60269 cri.go:89] found id: ""
	I0117 00:04:26.205682   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:26.205742   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.209923   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:26.209982   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:26.247432   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:26.247456   60269 cri.go:89] found id: ""
	I0117 00:04:26.247463   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:26.247514   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.251904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:26.252009   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:26.292943   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.292971   60269 cri.go:89] found id: ""
	I0117 00:04:26.292980   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:26.293038   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.298224   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:26.298307   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:26.338299   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:26.338322   60269 cri.go:89] found id: ""
	I0117 00:04:26.338331   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:26.338398   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.342452   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:26.342520   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:26.384665   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.384693   60269 cri.go:89] found id: ""
	I0117 00:04:26.384702   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:26.384761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.389556   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:26.389629   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:26.427717   60269 cri.go:89] found id: ""
	I0117 00:04:26.427748   60269 logs.go:284] 0 containers: []
	W0117 00:04:26.427758   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:26.427766   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:26.427825   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:26.467435   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.467463   60269 cri.go:89] found id: ""
	I0117 00:04:26.467471   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:26.467529   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.471617   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:26.471641   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.514185   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:26.514216   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.569408   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:26.569440   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.610011   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:26.610040   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:26.976249   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:26.976286   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:27.019812   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:27.019855   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:27.064258   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:27.064285   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:27.104147   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:27.104181   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:27.157665   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:27.157695   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:27.255786   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:27.255824   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:27.269460   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:27.269497   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:27.420255   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:27.420288   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.008636   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0117 00:04:30.014467   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0117 00:04:30.015693   60269 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:30.015716   60269 api_server.go:131] duration metric: took 3.888351113s to wait for apiserver health ...
	I0117 00:04:30.015724   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:30.015745   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:30.015789   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:30.055587   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.055608   60269 cri.go:89] found id: ""
	I0117 00:04:30.055626   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:30.055677   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.060043   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:30.060108   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:30.102912   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:30.102938   60269 cri.go:89] found id: ""
	I0117 00:04:30.102946   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:30.102995   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.107429   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:30.107490   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:30.149238   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.149259   60269 cri.go:89] found id: ""
	I0117 00:04:30.149266   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:30.149318   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.154207   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:30.154276   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:30.195972   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.195998   60269 cri.go:89] found id: ""
	I0117 00:04:30.196008   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:30.196067   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.200515   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:30.200593   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:30.242656   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.242686   60269 cri.go:89] found id: ""
	I0117 00:04:30.242696   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:30.242761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.247430   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:30.247488   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:30.285008   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.285036   60269 cri.go:89] found id: ""
	I0117 00:04:30.285045   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:30.285123   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.292254   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:30.292325   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:30.329856   60269 cri.go:89] found id: ""
	I0117 00:04:30.329884   60269 logs.go:284] 0 containers: []
	W0117 00:04:30.329895   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:30.329902   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:30.329962   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:30.370003   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.370026   60269 cri.go:89] found id: ""
	I0117 00:04:30.370033   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:30.370081   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.374869   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:30.374896   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:30.388524   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:30.388564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:30.520901   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:30.520935   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.568977   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:30.569016   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.604580   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:30.604620   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.642634   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:30.642668   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.692005   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:30.692048   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:30.745471   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:30.745532   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:30.842886   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:30.842926   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.891850   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:30.891882   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.929266   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:30.929295   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:31.236511   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:31.236564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:33.783706   60269 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:33.783732   60269 system_pods.go:61] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.783737   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.783742   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.783746   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.783750   60269 system_pods.go:61] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.783754   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.783760   60269 system_pods.go:61] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.783764   60269 system_pods.go:61] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.783772   60269 system_pods.go:74] duration metric: took 3.768043559s to wait for pod list to return data ...
	I0117 00:04:33.783780   60269 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:33.786490   60269 default_sa.go:45] found service account: "default"
	I0117 00:04:33.786515   60269 default_sa.go:55] duration metric: took 2.725972ms for default service account to be created ...
	I0117 00:04:33.786525   60269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:33.793345   60269 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:33.793372   60269 system_pods.go:89] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.793377   60269 system_pods.go:89] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.793382   60269 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.793388   60269 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.793392   60269 system_pods.go:89] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.793396   60269 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.793404   60269 system_pods.go:89] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.793410   60269 system_pods.go:89] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.793417   60269 system_pods.go:126] duration metric: took 6.886472ms to wait for k8s-apps to be running ...
	I0117 00:04:33.793427   60269 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:33.793470   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:33.809147   60269 system_svc.go:56] duration metric: took 15.709692ms WaitForService to wait for kubelet.
	I0117 00:04:33.809197   60269 kubeadm.go:581] duration metric: took 4m16.421187944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:33.809225   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:33.813251   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:33.813289   60269 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:33.813315   60269 node_conditions.go:105] duration metric: took 4.084961ms to run NodePressure ...
	I0117 00:04:33.813339   60269 start.go:228] waiting for startup goroutines ...
	I0117 00:04:33.813349   60269 start.go:233] waiting for cluster config update ...
	I0117 00:04:33.813362   60269 start.go:242] writing updated cluster config ...
	I0117 00:04:33.813716   60269 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:33.866136   60269 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:33.868353   60269 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-967325" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:54:14 UTC, ends at Wed 2024-01-17 00:08:22 UTC. --
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.513419634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450102513399778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=24fbd008-9f18-4d7f-9de9-00ed076b4e71 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.514257860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=788e1fcd-a85e-49c4-ac02-0848ca75d264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.514325406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=788e1fcd-a85e-49c4-ac02-0848ca75d264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.514497688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=788e1fcd-a85e-49c4-ac02-0848ca75d264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.556476014Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4f411001-2116-4bd9-9408-41f956a7e663 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.556562361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4f411001-2116-4bd9-9408-41f956a7e663 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.557724458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0ddf9863-125c-4dfa-b5fc-5e9cfefbeabe name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.558102247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450102558087568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=0ddf9863-125c-4dfa-b5fc-5e9cfefbeabe name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.558994531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a226391-1651-4107-9c33-12bceb0c655c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.559067562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a226391-1651-4107-9c33-12bceb0c655c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.559312109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a226391-1651-4107-9c33-12bceb0c655c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.599191757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=67913b22-75f3-4561-ab09-c25c9a887f95 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.599270405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=67913b22-75f3-4561-ab09-c25c9a887f95 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.600905941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=de18d9ce-d5f2-4588-8411-59ba0d6aad59 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.601327059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450102601311337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=de18d9ce-d5f2-4588-8411-59ba0d6aad59 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.601861728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6b79a7dd-0cf6-46b2-a0ef-1c814d682dbe name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.601934349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b79a7dd-0cf6-46b2-a0ef-1c814d682dbe name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.602204662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b79a7dd-0cf6-46b2-a0ef-1c814d682dbe name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.635688146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0d1d5ddf-e133-45fc-8cb3-98192dfe8fb8 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.635745934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0d1d5ddf-e133-45fc-8cb3-98192dfe8fb8 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.637430591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ec6ae46e-99c6-4f63-a2b1-10d072940f2e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.637735055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450102637720116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ec6ae46e-99c6-4f63-a2b1-10d072940f2e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.638461721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b992ed36-3333-46ba-b671-978d6bf3844f name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.638532185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b992ed36-3333-46ba-b671-978d6bf3844f name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:08:22 no-preload-085322 crio[720]: time="2024-01-17 00:08:22.638715049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b992ed36-3333-46ba-b671-978d6bf3844f name=/runtime.v1.RuntimeService/ListContainers
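
	The repeated ListContainers dumps above are crio answering CRI polling over its unix socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation further down); an empty filter is what produces the "No filters were applied, returning full container list" debug line. A minimal Go sketch of the same call, assuming the k8s.io/cri-api v1 client and google.golang.org/grpc are available on the client side:

	package main

	import (
		"context"
		"fmt"
		"log"
		"net"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI runtime socket that crio exposes (path taken from the
		// kubeadm.alpha.kubernetes.io/cri-socket annotation in this report).
		const sock = "/var/run/crio/crio.sock"
		conn, err := grpc.Dial("unix://"+sock,
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithContextDialer(func(ctx context.Context, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			}))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter asks for the full container list, mirroring the
		// ListContainersRequest shown in the crio debug log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}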
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60416d35ab032       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2e2d4cf3252ef       storage-provisioner
	f2010450e59e2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   63a3f258785a1       busybox
	77f52399b3a56       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   14b2a7aea6c5f       coredns-76f75df574-ptq95
	d53f5dc02719d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2e2d4cf3252ef       storage-provisioner
	beec9bf02a170       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   75aac7f7149bc       kube-proxy-64z5c
	3ae748115585f       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   2949cc2da8fdc       etcd-no-preload-085322
	307723cb0d2c3       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   19a8a31f45f0c       kube-scheduler-no-preload-085322
	bf6b71506f3a6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   1f366dc6c696e       kube-apiserver-no-preload-085322
	fa4073a76d415       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   25807c34c111e       kube-controller-manager-no-preload-085322
	
	
	==> coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41958 - 7134 "HINFO IN 4312849831828573737.8230304474284747680. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009782799s
	
	
	==> describe nodes <==
	Name:               no-preload-085322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-085322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=no-preload-085322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_46_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:46:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-085322
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jan 2024 00:08:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:05:38 +0000   Tue, 16 Jan 2024 23:46:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:05:38 +0000   Tue, 16 Jan 2024 23:46:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:05:38 +0000   Tue, 16 Jan 2024 23:46:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:05:38 +0000   Tue, 16 Jan 2024 23:55:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.183
	  Hostname:    no-preload-085322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e13f9fe9e7b4ec58d148ab9d15bf3f4
	  System UUID:                5e13f9fe-9e7b-4ec5-8d14-8ab9d15bf3f4
	  Boot ID:                    3ed7d3dd-fd9a-4acb-b2fd-65c880f13c81
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-76f75df574-ptq95                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-085322                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-085322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-085322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-64z5c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-085322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-xbr22              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-085322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node no-preload-085322 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-085322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-085322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-085322 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-085322 event: Registered Node no-preload-085322 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-085322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-085322 event: Registered Node no-preload-085322 in Controller
	
	
	==> dmesg <==
	[Jan16 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063360] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.275777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.689063] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136639] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.353841] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.270201] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.101382] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.141693] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.110008] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.221934] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +28.198260] systemd-fstab-generator[1332]: Ignoring "noauto" for root device
	[Jan16 23:55] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] <==
	{"level":"warn","ts":"2024-01-16T23:55:31.89531Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"399.207038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" ","response":"range_response_count:1 size:826"}
	{"level":"info","ts":"2024-01-16T23:55:31.895424Z","caller":"traceutil/trace.go:171","msg":"trace[824311049] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7; range_end:; response_count:1; response_revision:601; }","duration":"399.28577ms","start":"2024-01-16T23:55:31.496085Z","end":"2024-01-16T23:55:31.895371Z","steps":["trace[824311049] 'agreement among raft nodes before linearized reading'  (duration: 399.156651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:31.895462Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.496072Z","time spent":"399.379946ms","remote":"127.0.0.1:58816","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":848,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" "}
	{"level":"warn","ts":"2024-01-16T23:55:31.895572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"920.979077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22\" ","response":"range_response_count:1 size:4238"}
	{"level":"info","ts":"2024-01-16T23:55:31.895626Z","caller":"traceutil/trace.go:171","msg":"trace[1873008462] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22; range_end:; response_count:1; response_revision:601; }","duration":"921.039037ms","start":"2024-01-16T23:55:30.97458Z","end":"2024-01-16T23:55:31.895619Z","steps":["trace[1873008462] 'agreement among raft nodes before linearized reading'  (duration: 920.900848ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:31.895667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:30.974567Z","time spent":"921.093797ms","remote":"127.0.0.1:58840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4260,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22\" "}
	{"level":"warn","ts":"2024-01-16T23:55:32.761737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"599.40424ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13371905363659101707 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" mod_revision:554 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" value_size:716 lease:4148533326804325494 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-16T23:55:32.761869Z","caller":"traceutil/trace.go:171","msg":"trace[1147771782] linearizableReadLoop","detail":"{readStateIndex:646; appliedIndex:645; }","duration":"860.586663ms","start":"2024-01-16T23:55:31.901269Z","end":"2024-01-16T23:55:32.761856Z","steps":["trace[1147771782] 'read index received'  (duration: 260.933062ms)","trace[1147771782] 'applied index is now lower than readState.Index'  (duration: 599.652245ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T23:55:32.761937Z","caller":"traceutil/trace.go:171","msg":"trace[1635984485] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"863.927854ms","start":"2024-01-16T23:55:31.898Z","end":"2024-01-16T23:55:32.761928Z","steps":["trace[1635984485] 'process raft request'  (duration: 264.090722ms)","trace[1635984485] 'compare'  (duration: 598.974283ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T23:55:32.762004Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.897988Z","time spent":"863.973475ms","remote":"127.0.0.1:58816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":811,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" mod_revision:554 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" value_size:716 lease:4148533326804325494 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-xbr22.17aaf91b4cbd32f7\" > >"}
	{"level":"warn","ts":"2024-01-16T23:55:32.762306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"861.048268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-085322\" ","response":"range_response_count:1 size:4693"}
	{"level":"info","ts":"2024-01-16T23:55:32.762339Z","caller":"traceutil/trace.go:171","msg":"trace[408443954] range","detail":"{range_begin:/registry/minions/no-preload-085322; range_end:; response_count:1; response_revision:602; }","duration":"861.08209ms","start":"2024-01-16T23:55:31.901246Z","end":"2024-01-16T23:55:32.762328Z","steps":["trace[408443954] 'agreement among raft nodes before linearized reading'  (duration: 860.943688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:32.762361Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.901233Z","time spent":"861.122988ms","remote":"127.0.0.1:58838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4715,"request content":"key:\"/registry/minions/no-preload-085322\" "}
	{"level":"warn","ts":"2024-01-16T23:55:32.762497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"860.761475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22\" ","response":"range_response_count:1 size:4238"}
	{"level":"info","ts":"2024-01-16T23:55:32.762514Z","caller":"traceutil/trace.go:171","msg":"trace[424613040] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22; range_end:; response_count:1; response_revision:602; }","duration":"860.779365ms","start":"2024-01-16T23:55:31.901729Z","end":"2024-01-16T23:55:32.762508Z","steps":["trace[424613040] 'agreement among raft nodes before linearized reading'  (duration: 860.743395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:32.762548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.901721Z","time spent":"860.822422ms","remote":"127.0.0.1:58840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4260,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22\" "}
	{"level":"warn","ts":"2024-01-16T23:55:32.762794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.050206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T23:55:32.76282Z","caller":"traceutil/trace.go:171","msg":"trace[1739164744] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:602; }","duration":"376.0783ms","start":"2024-01-16T23:55:32.386733Z","end":"2024-01-16T23:55:32.762811Z","steps":["trace[1739164744] 'agreement among raft nodes before linearized reading'  (duration: 376.036004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:32.762838Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:32.386651Z","time spent":"376.183371ms","remote":"127.0.0.1:58792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-01-16T23:55:32.762986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"846.939988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:609"}
	{"level":"info","ts":"2024-01-16T23:55:32.763007Z","caller":"traceutil/trace.go:171","msg":"trace[184501532] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:602; }","duration":"846.994352ms","start":"2024-01-16T23:55:31.916006Z","end":"2024-01-16T23:55:32.763Z","steps":["trace[184501532] 'agreement among raft nodes before linearized reading'  (duration: 846.949866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:32.763027Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.915995Z","time spent":"847.026871ms","remote":"127.0.0.1:58836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":631,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-01-17T00:04:55.715074Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2024-01-17T00:04:55.717623Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":834,"took":"1.945748ms","hash":3413872945}
	{"level":"info","ts":"2024-01-17T00:04:55.717737Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3413872945,"revision":834,"compact-revision":-1}
	
	
	==> kernel <==
	 00:08:22 up 14 min,  0 users,  load average: 0.30, 0.21, 0.17
	Linux no-preload-085322 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] <==
	I0117 00:02:57.980861       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:04:56.981068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:04:56.981940       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0117 00:04:57.982977       1 handler_proxy.go:93] no RequestInfo found in the context
	W0117 00:04:57.983105       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:04:57.983118       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:04:57.983385       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0117 00:04:57.983332       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:04:57.985502       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:05:57.983503       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:05:57.983722       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:05:57.983758       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:05:57.986211       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:05:57.986323       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:05:57.986333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:07:57.984958       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:07:57.985074       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:07:57.985096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:07:57.987353       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:07:57.987486       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:07:57.987542       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
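
	The loop above is the aggregation layer repeatedly failing to reach the metrics-server behind the v1beta1.metrics.k8s.io APIService (HTTP 503), which matches the metrics-server pod never becoming ready in this run. A small client-go sketch that surfaces the same condition through the discovery API (assumption: a kubeconfig for this cluster is loadable via the standard rules; this snippet is not part of the test harness):

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load whatever kubeconfig the environment provides.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{},
		).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}

		disc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// If the aggregated API is healthy this succeeds; a 503 from the
		// aggregation layer (as in the apiserver log above) shows up as an error.
		res, err := disc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			log.Fatalf("metrics.k8s.io/v1beta1 not served: %v", err)
		}
		for _, r := range res.APIResources {
			fmt.Println(r.Name)
		}
	}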
	
	
	==> kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] <==
	I0117 00:02:40.501202       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:03:09.918704       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:03:10.512313       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:03:39.924422       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:03:40.520976       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:04:09.930777       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:04:10.529474       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:04:39.936488       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:04:40.537715       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:05:09.941976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:05:10.546092       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:05:39.947458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:05:40.555108       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:05:52.511428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="373.385µs"
	I0117 00:06:05.510858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="147.117µs"
	E0117 00:06:09.954692       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:06:10.564287       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:06:39.960117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:06:40.573040       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:07:09.965531       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:07:10.580777       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:07:39.972045       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:07:40.588946       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:08:09.978386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:08:10.597304       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] <==
	I0116 23:54:58.721175       1 server_others.go:72] "Using iptables proxy"
	I0116 23:54:58.738947       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.183"]
	I0116 23:54:58.801055       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0116 23:54:58.801091       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 23:54:58.801104       1 server_others.go:168] "Using iptables Proxier"
	I0116 23:54:58.803627       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 23:54:58.803902       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0116 23:54:58.803933       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 23:54:58.805905       1 config.go:188] "Starting service config controller"
	I0116 23:54:58.805944       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 23:54:58.805987       1 config.go:97] "Starting endpoint slice config controller"
	I0116 23:54:58.806012       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 23:54:58.809702       1 config.go:315] "Starting node config controller"
	I0116 23:54:58.809733       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 23:54:58.906195       1 shared_informer.go:318] Caches are synced for service config
	I0116 23:54:58.906129       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 23:54:58.909871       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] <==
	I0116 23:54:54.873211       1 serving.go:380] Generated self-signed cert in-memory
	W0116 23:54:56.939550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 23:54:56.939690       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:54:56.939722       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 23:54:56.939807       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 23:54:56.992229       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0116 23:54:56.992267       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 23:54:56.993774       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 23:54:56.993822       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 23:54:56.994670       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 23:54:56.997464       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 23:54:57.094654       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:54:14 UTC, ends at Wed 2024-01-17 00:08:23 UTC. --
	Jan 17 00:05:51 no-preload-085322 kubelet[1338]: E0117 00:05:51.515757    1338 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:05:51 no-preload-085322 kubelet[1338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:05:51 no-preload-085322 kubelet[1338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:05:51 no-preload-085322 kubelet[1338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:05:52 no-preload-085322 kubelet[1338]: E0117 00:05:52.492414    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:06:05 no-preload-085322 kubelet[1338]: E0117 00:06:05.493493    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:06:17 no-preload-085322 kubelet[1338]: E0117 00:06:17.491003    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:06:28 no-preload-085322 kubelet[1338]: E0117 00:06:28.492102    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:06:43 no-preload-085322 kubelet[1338]: E0117 00:06:43.492399    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:06:51 no-preload-085322 kubelet[1338]: E0117 00:06:51.515866    1338 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:06:51 no-preload-085322 kubelet[1338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:06:51 no-preload-085322 kubelet[1338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:06:51 no-preload-085322 kubelet[1338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:06:56 no-preload-085322 kubelet[1338]: E0117 00:06:56.491444    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:07:07 no-preload-085322 kubelet[1338]: E0117 00:07:07.492764    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:07:18 no-preload-085322 kubelet[1338]: E0117 00:07:18.492329    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:07:29 no-preload-085322 kubelet[1338]: E0117 00:07:29.492583    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:07:42 no-preload-085322 kubelet[1338]: E0117 00:07:42.491352    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:07:51 no-preload-085322 kubelet[1338]: E0117 00:07:51.517301    1338 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:07:51 no-preload-085322 kubelet[1338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:07:51 no-preload-085322 kubelet[1338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:07:51 no-preload-085322 kubelet[1338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:07:54 no-preload-085322 kubelet[1338]: E0117 00:07:54.491358    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:08:09 no-preload-085322 kubelet[1338]: E0117 00:08:09.494635    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:08:21 no-preload-085322 kubelet[1338]: E0117 00:08:21.492614    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	
	
	==> storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] <==
	I0116 23:55:29.854869       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:55:29.874193       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:55:29.874272       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:55:29.896125       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:55:29.897624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-085322_6652c1d5-66e2-4448-8f82-bf4dac8216fa!
	I0116 23:55:29.896386       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16b5d283-67d6-42b9-93d6-48a37a448a5d", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-085322_6652c1d5-66e2-4448-8f82-bf4dac8216fa became leader
	I0116 23:55:29.998040       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-085322_6652c1d5-66e2-4448-8f82-bf4dac8216fa!
	
	
	==> storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] <==
	I0116 23:54:58.703944       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 23:55:28.706701       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-085322 -n no-preload-085322
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-085322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xbr22
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-085322 describe pod metrics-server-57f55c9bc5-xbr22
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-085322 describe pod metrics-server-57f55c9bc5-xbr22: exit status 1 (70.155597ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xbr22" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-085322 describe pod metrics-server-57f55c9bc5-xbr22: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-837871 -n embed-certs-837871
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:13:11.473807909 +0000 UTC m=+5814.689311737
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-837871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-837871 logs -n 25: (1.558457894s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo cat                              | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:50:38.759896   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.759907   60269 out.go:309] Setting ErrFile to fd 2...
	I0116 23:50:38.759914   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.760126   60269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:50:38.760678   60269 out.go:303] Setting JSON to false
	I0116 23:50:38.761641   60269 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5585,"bootTime":1705443454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:50:38.761709   60269 start.go:138] virtualization: kvm guest
	I0116 23:50:38.763997   60269 out.go:177] * [default-k8s-diff-port-967325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:50:38.765368   60269 notify.go:220] Checking for updates...
	I0116 23:50:38.767255   60269 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:50:38.768689   60269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:50:38.770002   60269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:50:38.771265   60269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:50:38.772478   60269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:50:38.773887   60269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:50:38.775771   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:50:38.776343   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.776406   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.790484   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0116 23:50:38.790881   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.791331   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.791354   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.791767   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.791948   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.792207   60269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:50:38.792478   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.792512   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.806373   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0116 23:50:38.806769   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.807352   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.807377   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.807713   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.807888   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.844486   60269 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:50:38.845772   60269 start.go:298] selected driver: kvm2
	I0116 23:50:38.845786   60269 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.845896   60269 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:50:38.846669   60269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.846746   60269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:50:38.861437   60269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:50:38.861794   60269 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:50:38.861869   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:50:38.861886   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:50:38.861903   60269 start_flags.go:321] config:
	{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.862070   60269 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.864512   60269 out.go:177] * Starting control plane node default-k8s-diff-port-967325 in cluster default-k8s-diff-port-967325
	I0116 23:50:35.694534   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.766489   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.865813   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:50:38.865854   60269 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:50:38.865868   60269 cache.go:56] Caching tarball of preloaded images
	I0116 23:50:38.865946   60269 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:50:38.865958   60269 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:50:38.866067   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:50:38.866254   60269 start.go:365] acquiring machines lock for default-k8s-diff-port-967325: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:50:44.846593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:47.918614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:53.998619   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:57.070626   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:03.150612   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:06.222615   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:12.302594   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:15.374637   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:21.454609   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:24.526620   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:30.606636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:33.678599   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:39.758623   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:42.830638   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:48.910588   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:51.982570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:58.062585   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:01.134627   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:07.214606   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:10.286692   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:16.366642   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:19.438617   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:25.518614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:28.590572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:34.670577   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:37.742593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:43.822547   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:46.894566   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:52.974586   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:56.046663   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:02.126625   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:05.198647   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:11.278567   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:14.350629   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:20.430640   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:23.502572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:29.582639   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:32.654601   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:38.734636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:41.806621   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:47.886613   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:50.958654   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:57.038576   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:00.110570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:03.114737   59938 start.go:369] acquired machines lock for "no-preload-085322" in 4m4.444202574s
	I0116 23:54:03.114809   59938 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:03.114817   59938 fix.go:54] fixHost starting: 
	I0116 23:54:03.115151   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:03.115188   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:03.129740   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0116 23:54:03.130141   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:03.130598   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:54:03.130619   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:03.130926   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:03.131095   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:03.131232   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:54:03.132851   59938 fix.go:102] recreateIfNeeded on no-preload-085322: state=Stopped err=<nil>
	I0116 23:54:03.132873   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	W0116 23:54:03.133043   59938 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:03.134884   59938 out.go:177] * Restarting existing kvm2 VM for "no-preload-085322" ...
	I0116 23:54:03.136262   59938 main.go:141] libmachine: (no-preload-085322) Calling .Start
	I0116 23:54:03.136432   59938 main.go:141] libmachine: (no-preload-085322) Ensuring networks are active...
	I0116 23:54:03.137113   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network default is active
	I0116 23:54:03.137528   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network mk-no-preload-085322 is active
	I0116 23:54:03.137880   59938 main.go:141] libmachine: (no-preload-085322) Getting domain xml...
	I0116 23:54:03.138613   59938 main.go:141] libmachine: (no-preload-085322) Creating domain...
	I0116 23:54:03.112375   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:03.112409   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:54:03.114601   59622 machine.go:91] provisioned docker machine in 4m37.41859178s
	I0116 23:54:03.114647   59622 fix.go:56] fixHost completed within 4m37.439054279s
	I0116 23:54:03.114654   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 4m37.439073197s
	W0116 23:54:03.114678   59622 start.go:694] error starting host: provision: host is not running
	W0116 23:54:03.114769   59622 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:54:03.114780   59622 start.go:709] Will try again in 5 seconds ...
	I0116 23:54:04.327758   59938 main.go:141] libmachine: (no-preload-085322) Waiting to get IP...
	I0116 23:54:04.328580   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.329077   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.329172   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.329065   60794 retry.go:31] will retry after 242.417074ms: waiting for machine to come up
	I0116 23:54:04.573623   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.574286   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.574314   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.574234   60794 retry.go:31] will retry after 376.338621ms: waiting for machine to come up
	I0116 23:54:04.952081   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.952569   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.952609   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.952512   60794 retry.go:31] will retry after 437.645823ms: waiting for machine to come up
	I0116 23:54:05.392169   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.392672   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.392701   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.392621   60794 retry.go:31] will retry after 422.797207ms: waiting for machine to come up
	I0116 23:54:05.817196   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.817610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.817639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.817571   60794 retry.go:31] will retry after 640.372887ms: waiting for machine to come up
	I0116 23:54:06.459387   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:06.459792   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:06.459822   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:06.459719   60794 retry.go:31] will retry after 683.537292ms: waiting for machine to come up
	I0116 23:54:07.144668   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:07.144994   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:07.145027   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:07.144980   60794 retry.go:31] will retry after 898.931175ms: waiting for machine to come up
	I0116 23:54:08.045022   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:08.045409   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:08.045437   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:08.045355   60794 retry.go:31] will retry after 1.288697598s: waiting for machine to come up
	I0116 23:54:08.117270   59622 start.go:365] acquiring machines lock for old-k8s-version-771669: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:54:09.335202   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:09.335610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:09.335639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:09.335546   60794 retry.go:31] will retry after 1.355850443s: waiting for machine to come up
	I0116 23:54:10.693078   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:10.693554   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:10.693606   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:10.693520   60794 retry.go:31] will retry after 1.916329826s: waiting for machine to come up
	I0116 23:54:12.611840   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:12.612332   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:12.612367   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:12.612282   60794 retry.go:31] will retry after 2.556862035s: waiting for machine to come up
	I0116 23:54:15.171589   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:15.172039   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:15.172068   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:15.171972   60794 retry.go:31] will retry after 2.519530929s: waiting for machine to come up
	I0116 23:54:17.694557   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:17.694939   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:17.694968   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:17.694886   60794 retry.go:31] will retry after 3.090458186s: waiting for machine to come up
	I0116 23:54:21.986927   60073 start.go:369] acquired machines lock for "embed-certs-837871" in 4m12.827160117s
	I0116 23:54:21.986990   60073 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:21.986998   60073 fix.go:54] fixHost starting: 
	I0116 23:54:21.987380   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:21.987421   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:22.004600   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0116 23:54:22.004995   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:22.005467   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:54:22.005496   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:22.005829   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:22.006029   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:22.006185   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:54:22.008077   60073 fix.go:102] recreateIfNeeded on embed-certs-837871: state=Stopped err=<nil>
	I0116 23:54:22.008103   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	W0116 23:54:22.008290   60073 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:22.010638   60073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-837871" ...
	I0116 23:54:20.788433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788853   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has current primary IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788879   59938 main.go:141] libmachine: (no-preload-085322) Found IP for machine: 192.168.50.183
	I0116 23:54:20.788893   59938 main.go:141] libmachine: (no-preload-085322) Reserving static IP address...
	I0116 23:54:20.789229   59938 main.go:141] libmachine: (no-preload-085322) Reserved static IP address: 192.168.50.183
	I0116 23:54:20.789275   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.789290   59938 main.go:141] libmachine: (no-preload-085322) Waiting for SSH to be available...
	I0116 23:54:20.789318   59938 main.go:141] libmachine: (no-preload-085322) DBG | skip adding static IP to network mk-no-preload-085322 - found existing host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"}
	I0116 23:54:20.789337   59938 main.go:141] libmachine: (no-preload-085322) DBG | Getting to WaitForSSH function...
	I0116 23:54:20.791667   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792013   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.792054   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792155   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH client type: external
	I0116 23:54:20.792182   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa (-rw-------)
	I0116 23:54:20.792239   59938 main.go:141] libmachine: (no-preload-085322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:20.792264   59938 main.go:141] libmachine: (no-preload-085322) DBG | About to run SSH command:
	I0116 23:54:20.792282   59938 main.go:141] libmachine: (no-preload-085322) DBG | exit 0
	I0116 23:54:20.878320   59938 main.go:141] libmachine: (no-preload-085322) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:20.878650   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetConfigRaw
	I0116 23:54:20.879331   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:20.881964   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882374   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.882410   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882680   59938 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:54:20.882904   59938 machine.go:88] provisioning docker machine ...
	I0116 23:54:20.882923   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:20.883142   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883335   59938 buildroot.go:166] provisioning hostname "no-preload-085322"
	I0116 23:54:20.883356   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883553   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:20.885549   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.885943   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.885978   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.886040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:20.886216   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886593   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:20.886774   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:20.887119   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:20.887134   59938 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname
	I0116 23:54:21.013385   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-085322
	
	I0116 23:54:21.013408   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.016312   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016630   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.016670   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016859   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.017058   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017252   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017386   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.017557   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.017929   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.017956   59938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-085322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-085322/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-085322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:21.135238   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:21.135270   59938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:21.135289   59938 buildroot.go:174] setting up certificates
	I0116 23:54:21.135313   59938 provision.go:83] configureAuth start
	I0116 23:54:21.135326   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:21.135618   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.138168   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138443   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.138470   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138654   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.140789   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141120   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.141147   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141324   59938 provision.go:138] copyHostCerts
	I0116 23:54:21.141367   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:21.141377   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:21.141447   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:21.141550   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:21.141561   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:21.141599   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:21.141671   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:21.141682   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:21.141714   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:21.141791   59938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.no-preload-085322 san=[192.168.50.183 192.168.50.183 localhost 127.0.0.1 minikube no-preload-085322]
	I0116 23:54:21.265735   59938 provision.go:172] copyRemoteCerts
	I0116 23:54:21.265800   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:21.265825   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.268291   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268647   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.268676   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268842   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.269076   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.269250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.269383   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.351116   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:21.373208   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 23:54:21.395440   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:54:21.418028   59938 provision.go:86] duration metric: configureAuth took 282.698913ms
	I0116 23:54:21.418069   59938 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:21.418298   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:54:21.418409   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.421433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421751   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.421792   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421959   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.422191   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422491   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.422646   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.422977   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.422995   59938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:21.743469   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:21.743502   59938 machine.go:91] provisioned docker machine in 860.58306ms
	I0116 23:54:21.743515   59938 start.go:300] post-start starting for "no-preload-085322" (driver="kvm2")
	I0116 23:54:21.743538   59938 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:21.743558   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.743870   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:21.743898   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.746430   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746786   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.746823   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746957   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.747146   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.747302   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.747394   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.837160   59938 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:21.841116   59938 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:21.841157   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:21.841249   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:21.841329   59938 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:21.841413   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:21.849407   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:21.872039   59938 start.go:303] post-start completed in 128.504699ms
	I0116 23:54:21.872072   59938 fix.go:56] fixHost completed within 18.75725342s
	I0116 23:54:21.872110   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.874707   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875214   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.875240   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875487   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.875722   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.875867   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.876032   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.876210   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.876556   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.876570   59938 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:21.986781   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449261.939803143
	
	I0116 23:54:21.986801   59938 fix.go:206] guest clock: 1705449261.939803143
	I0116 23:54:21.986809   59938 fix.go:219] Guest: 2024-01-16 23:54:21.939803143 +0000 UTC Remote: 2024-01-16 23:54:21.872075872 +0000 UTC m=+263.353199909 (delta=67.727271ms)
	I0116 23:54:21.986830   59938 fix.go:190] guest clock delta is within tolerance: 67.727271ms
	I0116 23:54:21.986836   59938 start.go:83] releasing machines lock for "no-preload-085322", held for 18.872049435s
	I0116 23:54:21.986866   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.987132   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.990038   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990450   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.990479   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990658   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991145   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991340   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991433   59938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:21.991476   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.991598   59938 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:21.991622   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.994160   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994384   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994588   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994611   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994696   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.994864   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.994879   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994956   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.995040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.995116   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995212   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.995279   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.995338   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995469   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:22.075709   59938 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:22.113571   59938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:22.255250   59938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:22.261120   59938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:22.261199   59938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:22.275644   59938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:22.275667   59938 start.go:475] detecting cgroup driver to use...
	I0116 23:54:22.275740   59938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:22.292314   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:22.303940   59938 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:22.303994   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:22.316146   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:22.328261   59938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:22.429568   59938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:22.545391   59938 docker.go:233] disabling docker service ...
	I0116 23:54:22.545478   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:22.558823   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:22.571068   59938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:22.680713   59938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:22.784418   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:22.800751   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:22.819671   59938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:22.819738   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.831950   59938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:22.832019   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.842937   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.853168   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.863057   59938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:22.873184   59938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:22.881975   59938 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:22.882051   59938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:22.895888   59938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:22.904754   59938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:23.007196   59938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:23.167523   59938 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:23.167604   59938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:23.172603   59938 start.go:543] Will wait 60s for crictl version
	I0116 23:54:23.172661   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.176234   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:23.211267   59938 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:23.211355   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.255175   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.300404   59938 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 23:54:23.302242   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:23.305445   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.305835   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:23.305860   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.306058   59938 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:23.310150   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:23.321291   59938 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 23:54:23.321348   59938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:23.358829   59938 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 23:54:23.358866   59938 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:54:23.358910   59938 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:23.358974   59938 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.359014   59938 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.359037   59938 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.359019   59938 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 23:54:23.359109   59938 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.359116   59938 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.359192   59938 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360471   59938 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.360486   59938 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.360479   59938 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 23:54:23.360482   59938 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.360503   59938 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:22.012196   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Start
	I0116 23:54:22.012405   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring networks are active...
	I0116 23:54:22.013178   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network default is active
	I0116 23:54:22.013529   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network mk-embed-certs-837871 is active
	I0116 23:54:22.013912   60073 main.go:141] libmachine: (embed-certs-837871) Getting domain xml...
	I0116 23:54:22.014514   60073 main.go:141] libmachine: (embed-certs-837871) Creating domain...
	I0116 23:54:23.261878   60073 main.go:141] libmachine: (embed-certs-837871) Waiting to get IP...
	I0116 23:54:23.263010   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.263550   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.263625   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.263530   60915 retry.go:31] will retry after 307.379701ms: waiting for machine to come up
	I0116 23:54:23.572127   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.572604   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.572640   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.572557   60915 retry.go:31] will retry after 367.767271ms: waiting for machine to come up
	I0116 23:54:23.942420   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.942907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.942937   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.942855   60915 retry.go:31] will retry after 327.227989ms: waiting for machine to come up
	I0116 23:54:23.582933   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.587427   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.591221   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 23:54:23.600943   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.601854   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.620857   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.636430   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.654149   59938 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 23:54:23.654203   59938 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.654256   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.704462   59938 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 23:54:23.704519   59938 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.704571   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851614   59938 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 23:54:23.851646   59938 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 23:54:23.851663   59938 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.851662   59938 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851711   59938 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 23:54:23.851754   59938 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.851767   59938 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 23:54:23.851795   59938 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.851802   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851832   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.851843   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851845   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.868480   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.906566   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.906609   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.906713   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.927452   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.927455   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.927669   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.927767   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.959664   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 23:54:23.959782   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:23.990016   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 23:54:23.990042   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990040   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:23.990089   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990217   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:24.018967   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019064   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 23:54:24.019080   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019089   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019115   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 23:54:24.019135   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019160   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:24.164580   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.888709   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898467269s)
	I0116 23:54:26.888747   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 23:54:26.888768   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888777   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.869591717s)
	I0116 23:54:26.888817   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888824   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 23:54:26.888710   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.869617277s)
	I0116 23:54:26.888879   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 23:54:26.888856   59938 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724243534s)
	I0116 23:54:26.888931   59938 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 23:54:26.888965   59938 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.889006   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:24.271311   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.271747   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.271777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.271695   60915 retry.go:31] will retry after 459.459832ms: waiting for machine to come up
	I0116 23:54:24.732506   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.733007   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.733036   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.732957   60915 retry.go:31] will retry after 584.775753ms: waiting for machine to come up
	I0116 23:54:25.319663   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:25.320171   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:25.320215   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:25.320117   60915 retry.go:31] will retry after 942.568443ms: waiting for machine to come up
	I0116 23:54:26.264735   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:26.265207   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:26.265241   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:26.265152   60915 retry.go:31] will retry after 986.504626ms: waiting for machine to come up
	I0116 23:54:27.253751   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:27.254422   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:27.254451   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:27.254363   60915 retry.go:31] will retry after 1.332096797s: waiting for machine to come up
	I0116 23:54:28.588407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:28.589024   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:28.589057   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:28.588967   60915 retry.go:31] will retry after 1.510766858s: waiting for machine to come up
	I0116 23:54:29.054814   59938 ssh_runner.go:235] Completed: which crictl: (2.165780571s)
	I0116 23:54:29.054899   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:29.054938   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.166081855s)
	I0116 23:54:29.054973   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 23:54:29.055002   59938 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:29.055058   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:32.781289   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.726190592s)
	I0116 23:54:32.781378   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 23:54:32.781384   59938 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.72645917s)
	I0116 23:54:32.781421   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781452   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 23:54:32.781499   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781549   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:32.786061   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 23:54:30.101582   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:30.102035   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:30.102080   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:30.101996   60915 retry.go:31] will retry after 1.681256612s: waiting for machine to come up
	I0116 23:54:31.786133   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:31.786678   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:31.786717   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:31.786625   60915 retry.go:31] will retry after 2.501397759s: waiting for machine to come up
	I0116 23:54:35.155364   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.37383462s)
	I0116 23:54:35.155398   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 23:54:35.155423   59938 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:35.155471   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:37.035841   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880336789s)
	I0116 23:54:37.035878   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 23:54:37.035908   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:37.035957   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:38.382731   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.346744157s)
	I0116 23:54:38.382770   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 23:54:38.382801   59938 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:38.382857   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:34.289289   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:34.289853   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:34.289876   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:34.289788   60915 retry.go:31] will retry after 2.655614857s: waiting for machine to come up
	I0116 23:54:36.947614   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:36.948090   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:36.948110   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:36.948022   60915 retry.go:31] will retry after 3.331974558s: waiting for machine to come up
	I0116 23:54:41.527170   60269 start.go:369] acquired machines lock for "default-k8s-diff-port-967325" in 4m2.660883224s
	I0116 23:54:41.527252   60269 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:41.527265   60269 fix.go:54] fixHost starting: 
	I0116 23:54:41.527698   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:41.527739   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:41.544050   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0116 23:54:41.544467   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:41.544979   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:54:41.545009   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:41.545297   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:41.545474   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:54:41.545619   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:54:41.547250   60269 fix.go:102] recreateIfNeeded on default-k8s-diff-port-967325: state=Stopped err=<nil>
	I0116 23:54:41.547276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	W0116 23:54:41.547440   60269 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:41.550415   60269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-967325" ...
	I0116 23:54:40.284163   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.284689   60073 main.go:141] libmachine: (embed-certs-837871) Found IP for machine: 192.168.39.226
	I0116 23:54:40.284718   60073 main.go:141] libmachine: (embed-certs-837871) Reserving static IP address...
	I0116 23:54:40.284734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has current primary IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.285176   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.285209   60073 main.go:141] libmachine: (embed-certs-837871) DBG | skip adding static IP to network mk-embed-certs-837871 - found existing host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"}
	I0116 23:54:40.285223   60073 main.go:141] libmachine: (embed-certs-837871) Reserved static IP address: 192.168.39.226
	I0116 23:54:40.285240   60073 main.go:141] libmachine: (embed-certs-837871) Waiting for SSH to be available...
	I0116 23:54:40.285254   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Getting to WaitForSSH function...
	I0116 23:54:40.287766   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288257   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.288283   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288417   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH client type: external
	I0116 23:54:40.288441   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa (-rw-------)
	I0116 23:54:40.288466   60073 main.go:141] libmachine: (embed-certs-837871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:40.288473   60073 main.go:141] libmachine: (embed-certs-837871) DBG | About to run SSH command:
	I0116 23:54:40.288481   60073 main.go:141] libmachine: (embed-certs-837871) DBG | exit 0
	I0116 23:54:40.374194   60073 main.go:141] libmachine: (embed-certs-837871) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:40.374646   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetConfigRaw
	I0116 23:54:40.375380   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.378323   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.378843   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.378877   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.379145   60073 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:54:40.379332   60073 machine.go:88] provisioning docker machine ...
	I0116 23:54:40.379351   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:40.379538   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379712   60073 buildroot.go:166] provisioning hostname "embed-certs-837871"
	I0116 23:54:40.379731   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379882   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.382022   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382386   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.382408   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382542   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.382695   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.382833   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.383019   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.383201   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.383686   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.383707   60073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-837871 && echo "embed-certs-837871" | sudo tee /etc/hostname
	I0116 23:54:40.506034   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-837871
	
	I0116 23:54:40.506064   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.508789   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509236   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.509266   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509427   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.509624   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509782   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509909   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.510109   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.510593   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.510620   60073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-837871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-837871/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-837871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:40.626272   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:40.626298   60073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:40.626356   60073 buildroot.go:174] setting up certificates
	I0116 23:54:40.626372   60073 provision.go:83] configureAuth start
	I0116 23:54:40.626383   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.626705   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.629226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629577   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.629605   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629737   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.631784   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632093   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.632114   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632249   60073 provision.go:138] copyHostCerts
	I0116 23:54:40.632306   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:40.632318   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:40.632389   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:40.632489   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:40.632499   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:40.632529   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:40.632607   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:40.632617   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:40.632645   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:40.632705   60073 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.embed-certs-837871 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube embed-certs-837871]
	I0116 23:54:40.842680   60073 provision.go:172] copyRemoteCerts
	I0116 23:54:40.842749   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:40.842778   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.845198   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845585   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.845626   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845798   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.845987   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.846158   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.846313   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:40.931372   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:54:40.955528   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:40.979724   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 23:54:41.000711   60073 provision.go:86] duration metric: configureAuth took 374.325381ms
	I0116 23:54:41.000743   60073 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:41.000988   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:54:41.001078   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.003907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.004256   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004472   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.004703   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.004886   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.005025   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.005172   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.005489   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.005505   60073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:41.294820   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:41.294846   60073 machine.go:91] provisioned docker machine in 915.500911ms
	I0116 23:54:41.294860   60073 start.go:300] post-start starting for "embed-certs-837871" (driver="kvm2")
	I0116 23:54:41.294873   60073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:41.294894   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.295245   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:41.295275   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.298053   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298453   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.298482   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298630   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.298831   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.299028   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.299229   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.383434   60073 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:41.387526   60073 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:41.387550   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:41.387618   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:41.387716   60073 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:41.387832   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:41.395959   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:41.417602   60073 start.go:303] post-start completed in 122.726786ms
	I0116 23:54:41.417634   60073 fix.go:56] fixHost completed within 19.430636017s
	I0116 23:54:41.417657   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.420348   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420665   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.420692   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420853   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.421099   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421245   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421386   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.421532   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.421882   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.421898   60073 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:41.527026   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449281.479666719
	
	I0116 23:54:41.527054   60073 fix.go:206] guest clock: 1705449281.479666719
	I0116 23:54:41.527061   60073 fix.go:219] Guest: 2024-01-16 23:54:41.479666719 +0000 UTC Remote: 2024-01-16 23:54:41.417638777 +0000 UTC m=+272.403645668 (delta=62.027942ms)
	I0116 23:54:41.527080   60073 fix.go:190] guest clock delta is within tolerance: 62.027942ms
	I0116 23:54:41.527085   60073 start.go:83] releasing machines lock for "embed-certs-837871", held for 19.540117712s
	I0116 23:54:41.527105   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.527420   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:41.530393   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.530857   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.530884   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.531031   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531460   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531637   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531720   60073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:41.531774   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.531821   60073 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:41.531854   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.534407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534578   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.534819   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534933   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535031   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.535068   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.535135   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535229   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535308   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535381   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535431   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.535512   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535633   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.653469   60073 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:41.658877   60073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:41.797035   60073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:41.804397   60073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:41.804475   60073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:41.819295   60073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:41.819319   60073 start.go:475] detecting cgroup driver to use...
	I0116 23:54:41.819382   60073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:41.833454   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:41.845089   60073 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:41.845145   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:41.857037   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:41.869156   60073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:41.968252   60073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:42.079885   60073 docker.go:233] disabling docker service ...
	I0116 23:54:42.079949   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:42.091847   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:42.102517   60073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:42.217275   60073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:42.314542   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:42.326438   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:42.342285   60073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:42.342356   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.354962   60073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:42.355039   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.367222   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.379029   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.387819   60073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:42.396923   60073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:42.404505   60073 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:42.404567   60073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:42.415632   60073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:42.423935   60073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:42.520457   60073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:42.676659   60073 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:42.676727   60073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:42.681457   60073 start.go:543] Will wait 60s for crictl version
	I0116 23:54:42.681535   60073 ssh_runner.go:195] Run: which crictl
	I0116 23:54:42.685259   60073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:42.728719   60073 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:42.728807   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.780603   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.830363   60073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:54:39.032115   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 23:54:39.032163   59938 cache_images.go:123] Successfully loaded all cached images
	I0116 23:54:39.032171   59938 cache_images.go:92] LoadImages completed in 15.67329231s
	I0116 23:54:39.032335   59938 ssh_runner.go:195] Run: crio config
	I0116 23:54:39.091256   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:39.091279   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:39.091299   59938 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:39.091318   59938 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-085322 NodeName:no-preload-085322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:39.091470   59938 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-085322"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:39.091558   59938 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-085322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:39.091619   59938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 23:54:39.100748   59938 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:39.100805   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:39.108879   59938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 23:54:39.123478   59938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 23:54:39.138234   59938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 23:54:39.153408   59938 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:39.156806   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:39.168459   59938 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322 for IP: 192.168.50.183
	I0116 23:54:39.168490   59938 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:39.168630   59938 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:39.168669   59938 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:39.168728   59938 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/client.key
	I0116 23:54:39.168800   59938 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key.c63b40e0
	I0116 23:54:39.168839   59938 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key
	I0116 23:54:39.168946   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:39.168971   59938 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:39.168981   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:39.169006   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:39.169029   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:39.169052   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:39.169104   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:39.169755   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:39.191634   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:54:39.213185   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:39.234431   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:54:39.255434   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:39.277092   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:39.299752   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:39.321124   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:39.342706   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:39.363848   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:39.384588   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:39.405641   59938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:39.421517   59938 ssh_runner.go:195] Run: openssl version
	I0116 23:54:39.426839   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:39.435875   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440157   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440217   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.445267   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:39.454308   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:39.463232   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467601   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467660   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.473056   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:39.482143   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:39.491441   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495918   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495984   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.501453   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:39.510832   59938 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:39.515055   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:39.520820   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:39.526190   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:39.531649   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:39.536949   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:39.542406   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:39.547673   59938 kubeadm.go:404] StartCluster: {Name:no-preload-085322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:39.547793   59938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:39.547843   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:39.584159   59938 cri.go:89] found id: ""
	I0116 23:54:39.584236   59938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:39.592749   59938 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:39.592769   59938 kubeadm.go:636] restartCluster start
	I0116 23:54:39.592830   59938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:39.600998   59938 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:39.602031   59938 kubeconfig.go:92] found "no-preload-085322" server: "https://192.168.50.183:8443"
	I0116 23:54:39.604410   59938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:39.612167   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:39.612220   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:39.622740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.112200   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.112274   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.123342   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.612980   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.613059   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.624162   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.112722   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.112787   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.123740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.612248   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.626135   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.112616   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.112723   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.126872   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.612417   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.612503   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.623787   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.112309   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.112383   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.127168   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
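The repeated api_server.go:166/170 entries above show the restart path polling for a kube-apiserver process roughly every 500ms until a pid appears. Below is a minimal, self-contained Go sketch of that polling pattern only; it is not minikube's implementation, and the helper name, timeout, and direct local exec (instead of minikube's ssh_runner) are assumptions made for illustration.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID keeps running pgrep until a kube-apiserver process is
// found or the context expires. Hypothetical helper, not api_server.go.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pid found
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}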
	I0116 23:54:41.551739   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Start
	I0116 23:54:41.551879   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring networks are active...
	I0116 23:54:41.552631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network default is active
	I0116 23:54:41.552977   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network mk-default-k8s-diff-port-967325 is active
	I0116 23:54:41.553395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Getting domain xml...
	I0116 23:54:41.554029   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Creating domain...
	I0116 23:54:42.830696   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting to get IP...
	I0116 23:54:42.831669   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832085   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832186   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:42.832069   61077 retry.go:31] will retry after 250.838508ms: waiting for machine to come up
	I0116 23:54:43.084848   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085478   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085513   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.085378   61077 retry.go:31] will retry after 344.020128ms: waiting for machine to come up
	I0116 23:54:43.430795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431300   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431329   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.431260   61077 retry.go:31] will retry after 397.588837ms: waiting for machine to come up
	I0116 23:54:42.831766   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:42.834360   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:42.834763   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834949   60073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:42.838761   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:42.853154   60073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:54:42.853222   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:42.890184   60073 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:54:42.890265   60073 ssh_runner.go:195] Run: which lz4
	I0116 23:54:42.894168   60073 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:54:42.898036   60073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:54:42.898066   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:54:43.612492   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.612614   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.626278   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.112257   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.112377   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.126612   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.612241   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.626667   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.112214   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.112305   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.127417   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.612957   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.613061   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.626610   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.112219   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.112324   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.126151   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.612419   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.612513   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.623163   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.112516   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.112621   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.123247   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.612620   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.612713   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.623687   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.112357   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.112460   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.126673   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.830893   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831467   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.831405   61077 retry.go:31] will retry after 443.763933ms: waiting for machine to come up
	I0116 23:54:44.277218   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277738   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.277666   61077 retry.go:31] will retry after 534.948362ms: waiting for machine to come up
	I0116 23:54:44.814256   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814634   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.814585   61077 retry.go:31] will retry after 942.746702ms: waiting for machine to come up
	I0116 23:54:45.758822   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759311   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759340   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:45.759238   61077 retry.go:31] will retry after 1.189643515s: waiting for machine to come up
	I0116 23:54:46.951211   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951644   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:46.951576   61077 retry.go:31] will retry after 1.124824496s: waiting for machine to come up
	I0116 23:54:48.077539   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.077964   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.078001   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:48.077909   61077 retry.go:31] will retry after 1.239334518s: waiting for machine to come up
	I0116 23:54:44.553853   60073 crio.go:444] Took 1.659729 seconds to copy over tarball
	I0116 23:54:44.553941   60073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:54:47.428880   60073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87490029s)
	I0116 23:54:47.428913   60073 crio.go:451] Took 2.875036 seconds to extract the tarball
	I0116 23:54:47.428921   60073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:54:47.469606   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:47.521549   60073 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:54:47.521580   60073 cache_images.go:84] Images are preloaded, skipping loading
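The crio.go:492/496 lines above decide whether the preload tarball has to be copied and extracted by asking crictl which images are already present. Below is a rough Go sketch of that kind of check; the struct fields mirror crictl's JSON output, the helper name and target image are chosen for illustration, and none of it is minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages matches the shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows the given image tag.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}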
	I0116 23:54:47.521660   60073 ssh_runner.go:195] Run: crio config
	I0116 23:54:47.575254   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:54:47.575276   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:47.575292   60073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:47.575309   60073 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-837871 NodeName:embed-certs-837871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:47.575434   60073 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-837871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:47.575518   60073 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-837871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:47.575569   60073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:54:47.584525   60073 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:47.584604   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:47.592958   60073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 23:54:47.608090   60073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:54:47.623862   60073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 23:54:47.640242   60073 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:47.644031   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:47.658210   60073 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871 for IP: 192.168.39.226
	I0116 23:54:47.658247   60073 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:47.658451   60073 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:47.658543   60073 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:47.658766   60073 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/client.key
	I0116 23:54:47.658866   60073 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key.1754aec7
	I0116 23:54:47.658920   60073 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key
	I0116 23:54:47.659066   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:47.659104   60073 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:47.659123   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:47.659160   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:47.659190   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:47.659223   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:47.659275   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:47.659998   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:47.687031   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:54:47.713026   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:47.738546   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:54:47.764460   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:47.789464   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:47.814847   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:47.839476   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:47.864396   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:47.889208   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:47.914128   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:47.935079   60073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:47.950932   60073 ssh_runner.go:195] Run: openssl version
	I0116 23:54:47.957306   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:47.967238   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972287   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972338   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.977862   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:47.989326   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:47.999739   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004111   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004170   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.009425   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:48.019822   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:48.029871   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034154   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034221   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.039911   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:48.051585   60073 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:48.056576   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:48.062200   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:48.067931   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:48.073393   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:48.079291   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:48.084923   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
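The `openssl x509 ... -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The same check can be sketched with Go's crypto/x509 instead of openssl; the helper name and example path below are illustrative only.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the Go analogue of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}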
	I0116 23:54:48.090458   60073 kubeadm.go:404] StartCluster: {Name:embed-certs-837871 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:48.090572   60073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:48.090637   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:48.132138   60073 cri.go:89] found id: ""
	I0116 23:54:48.132214   60073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:48.141955   60073 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:48.141976   60073 kubeadm.go:636] restartCluster start
	I0116 23:54:48.142032   60073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:48.151297   60073 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.152324   60073 kubeconfig.go:92] found "embed-certs-837871" server: "https://192.168.39.226:8443"
	I0116 23:54:48.154585   60073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:48.163509   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.163570   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.175536   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.664083   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.664180   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.676605   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.613067   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.992894   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.004266   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.112494   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.112595   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.123795   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.612548   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.612642   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.626676   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.626707   59938 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:49.626718   59938 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:49.626732   59938 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:49.626806   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:49.668119   59938 cri.go:89] found id: ""
	I0116 23:54:49.668192   59938 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:49.682918   59938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:49.691744   59938 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:49.691817   59938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700863   59938 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700895   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:49.815616   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.020421   59938 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.204764214s)
	I0116 23:54:51.020454   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.216832   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.332109   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
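With the stale kubeconfigs found missing, the log above re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. The Go loop below is only a hypothetical sketch of that ordering; minikube actually drives these through ssh_runner with an adjusted PATH, which this omits.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The phase order mirrors the commands visible in the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("phase %v: err=%v\n%s", p, err, out)
	}
}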
	I0116 23:54:51.399376   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:51.399475   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:51.899827   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.400392   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.899528   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.399686   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:49.319244   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319686   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319717   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:49.319624   61077 retry.go:31] will retry after 1.922153535s: waiting for machine to come up
	I0116 23:54:51.243587   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244058   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244098   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:51.244008   61077 retry.go:31] will retry after 2.437065869s: waiting for machine to come up
	I0116 23:54:53.683433   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683851   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683882   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:53.683823   61077 retry.go:31] will retry after 3.130209662s: waiting for machine to come up
	I0116 23:54:49.163895   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.351314   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.362966   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.664243   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.664369   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.683487   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.163655   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.163757   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.180005   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.664531   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.664611   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.680106   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.163758   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.163894   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.179982   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.664626   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.664708   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.676699   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.163544   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.163670   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.180656   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.663792   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.663880   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.678849   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.164052   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.164169   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.178666   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.664220   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.664316   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.678867   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.899990   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.919132   59938 api_server.go:72] duration metric: took 2.51975517s to wait for apiserver process to appear ...
	I0116 23:54:53.919159   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:54:53.919179   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.905143   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.905180   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.905196   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.941657   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.941684   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.941697   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.986154   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.986183   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:57.419788   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.424352   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.424379   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:57.919987   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.926989   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.927013   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:58.420219   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:58.426904   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:54:58.435007   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:54:58.435038   59938 api_server.go:131] duration metric: took 4.515871856s to wait for apiserver health ...
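The healthz wait above follows the usual progression: 403 while anonymous access to /healthz is still forbidden, 500 while post-start hooks (rbac/bootstrap-roles, the system priority classes) are still settling, then 200 with a bare "ok". Below is a rough standalone Go sketch of such a wait loop; the endpoint, timeout, and skipped TLS verification are illustrative assumptions and this is not api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.183:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // expected: "ok"
				return
			}
			// 403/500 just mean "not ready yet"; keep polling.
			fmt.Println("not ready, status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}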
	I0116 23:54:58.435051   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:58.435061   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:58.437150   59938 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:54:58.438936   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:54:58.455657   59938 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:54:58.508821   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:54:58.522305   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:54:58.522361   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:54:58.522372   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:54:58.522386   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:54:58.522403   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:54:58.522414   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:54:58.522428   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:54:58.522440   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:54:58.522449   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:54:58.522459   59938 system_pods.go:74] duration metric: took 13.604825ms to wait for pod list to return data ...
	I0116 23:54:58.522472   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:54:58.525739   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:54:58.525780   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:54:58.525802   59938 node_conditions.go:105] duration metric: took 3.32348ms to run NodePressure ...
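The system_pods.go lines above wait until the eight kube-system pods can be listed before moving on to the addon phase. For comparison only, a hypothetical client-go sketch that lists the same namespace (requires the k8s.io/client-go and k8s.io/apimachinery modules; the kubeconfig path is a placeholder, not a path from this run).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}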
	I0116 23:54:58.525836   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:56.815572   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816189   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816215   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:56.816141   61077 retry.go:31] will retry after 4.356544243s: waiting for machine to come up
	I0116 23:54:54.164263   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.164410   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.179137   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:54.663638   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.663755   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.678463   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.163957   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.164041   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.177018   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.663543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.663648   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.674693   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.164347   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.164456   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.175674   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.664319   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.664402   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.675373   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.164471   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.164576   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.176504   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.664144   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.664251   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.676983   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.164543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:58.164621   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:58.176779   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.176811   60073 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:58.176821   60073 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:58.176833   60073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:58.176899   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:58.214453   60073 cri.go:89] found id: ""
	I0116 23:54:58.214526   60073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:58.232076   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:58.240808   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:58.240879   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.249983   60073 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.250013   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.373313   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
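Note on the block above: the five-digit field after the timestamp (60073 here; 59938, 60269 and 59622 elsewhere) is klog's pid field, one per parallel minikube process, which is why several profiles' streams interleave. The repeated api_server.go:170 failures are the 60073 profile probing the VM over SSH for a running kube-apiserver; pgrep exiting with status 1 and empty output just means no matching process exists yet, so the restart path concludes "needs reconfigure" and re-runs the kubeadm certs and kubeconfig phases. A rough, illustrative equivalent of that probe run by hand inside the VM (not the harness's exact code path):

    # Print the newest PID whose full command line matches; exits 1 while no apiserver is up yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo "pgrep exit status: $?"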
	I0116 23:54:58.857922   59938 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862719   59938 kubeadm.go:787] kubelet initialised
	I0116 23:54:58.862738   59938 kubeadm.go:788] duration metric: took 4.782925ms waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862746   59938 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:54:58.869022   59938 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.874505   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874535   59938 pod_ready.go:81] duration metric: took 5.485562ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.874546   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874554   59938 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.879329   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879355   59938 pod_ready.go:81] duration metric: took 4.787755ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.879363   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879368   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.883928   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883949   59938 pod_ready.go:81] duration metric: took 4.571713ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.883961   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883969   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.912868   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912894   59938 pod_ready.go:81] duration metric: took 28.911722ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.912907   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912915   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.313029   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313069   59938 pod_ready.go:81] duration metric: took 400.142619ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.313082   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313090   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.712991   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713014   59938 pod_ready.go:81] duration metric: took 399.912003ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.713023   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713028   59938 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:00.114190   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114215   59938 pod_ready.go:81] duration metric: took 401.177651ms waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:00.114225   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114231   59938 pod_ready.go:38] duration metric: took 1.251475914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:00.114247   59938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:00.127362   59938 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:00.127388   59938 kubeadm.go:640] restartCluster took 20.534611532s
	I0116 23:55:00.127403   59938 kubeadm.go:406] StartCluster complete in 20.579733794s
	I0116 23:55:00.127422   59938 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.127503   59938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:00.129224   59938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.129463   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:00.130188   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:55:00.129546   59938 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:00.130489   59938 addons.go:69] Setting storage-provisioner=true in profile "no-preload-085322"
	I0116 23:55:00.130520   59938 addons.go:234] Setting addon storage-provisioner=true in "no-preload-085322"
	W0116 23:55:00.130550   59938 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:00.130626   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.131148   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.131179   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.131603   59938 addons.go:69] Setting default-storageclass=true in profile "no-preload-085322"
	I0116 23:55:00.131662   59938 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-085322"
	I0116 23:55:00.132229   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.132282   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.132642   59938 addons.go:69] Setting metrics-server=true in profile "no-preload-085322"
	I0116 23:55:00.132682   59938 addons.go:234] Setting addon metrics-server=true in "no-preload-085322"
	W0116 23:55:00.132691   59938 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:00.132738   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.133280   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.133322   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.137759   59938 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-085322" context rescaled to 1 replicas
	I0116 23:55:00.137827   59938 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:00.139774   59938 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:00.141410   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:00.150892   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0116 23:55:00.151398   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.151952   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.151970   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.152274   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0116 23:55:00.152458   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0116 23:55:00.152489   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.152695   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.152865   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153081   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153356   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153401   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153541   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153583   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153867   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.153942   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.154667   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.154714   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.155326   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.155362   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.156980   59938 addons.go:234] Setting addon default-storageclass=true in "no-preload-085322"
	W0116 23:55:00.157007   59938 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:00.157043   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.157421   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.157529   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.174130   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0116 23:55:00.174627   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.175185   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.175204   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.175566   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.175814   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.175862   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0116 23:55:00.176349   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.176936   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.176948   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.177295   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.177469   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.177631   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.179319   59938 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:00.180744   59938 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.180762   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:00.180777   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.179023   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.182381   59938 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:00.183551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:00.183564   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:00.183585   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.183692   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184112   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.184133   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.184767   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.184932   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.185450   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.186460   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.186779   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.186812   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.187038   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.187221   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.187328   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.187452   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.189369   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0116 23:55:00.189703   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.190080   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.190091   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.190478   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.190890   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.190930   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.205734   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0116 23:55:00.206238   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.206799   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.206818   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.207212   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.207446   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.208811   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.209063   59938 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.209077   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:00.209094   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.211899   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212297   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.212323   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212575   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.212826   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.213095   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.213275   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.307298   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.335551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:00.335575   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:00.372999   59938 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:00.373001   59938 node_ready.go:35] waiting up to 6m0s for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:00.378131   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:00.378152   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:00.380282   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.401018   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:00.401069   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:00.426132   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093491344s)
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020515974s)
	I0116 23:55:01.400920   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400937   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400965   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400993   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400886   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401092   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401295   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401313   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401324   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401334   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401360   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401402   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401416   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401417   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401426   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401436   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401448   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401458   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401468   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401476   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401725   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401757   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401781   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401789   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401797   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401950   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401973   59938 addons.go:470] Verifying addon metrics-server=true in "no-preload-085322"
	I0116 23:55:01.403136   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.403161   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.403172   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.410263   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.410287   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.410536   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.410575   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.410578   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.412923   59938 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
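Once the manifests under /etc/kubernetes/addons/ have been applied (the kubectl apply runs at 23:55:00 above), one hypothetical way to spot-check the metrics-server addon from the host, assuming the stock APIService name and pod label used by the addon, is:

    # Hypothetical manual check, not part of the test harness:
    kubectl --context no-preload-085322 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-085322 -n kube-system get pods -l k8s-app=metrics-server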
	I0116 23:55:02.567723   59622 start.go:369] acquired machines lock for "old-k8s-version-771669" in 54.450397128s
	I0116 23:55:02.567772   59622 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:55:02.567779   59622 fix.go:54] fixHost starting: 
	I0116 23:55:02.568183   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:02.568215   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:02.587692   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0116 23:55:02.588096   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:02.588571   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:02.588590   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:02.588934   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:02.589163   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:02.589273   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:02.590929   59622 fix.go:102] recreateIfNeeded on old-k8s-version-771669: state=Stopped err=<nil>
	I0116 23:55:02.591002   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	W0116 23:55:02.591207   59622 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:55:02.593233   59622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-771669" ...
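fix.go above finds the existing old-k8s-version-771669 machine in state Stopped, so the kvm2 driver restarts the libvirt domain instead of recreating it (the domain name matches the profile name, as the other DBG lines show). A hypothetical way to watch the same transition from the host, assuming the default system libvirt connection:

    # Hypothetical host-side check, not the harness's code path:
    virsh --connect qemu:///system domstate old-k8s-version-771669
    virsh --connect qemu:///system start old-k8s-version-771669   # roughly what the driver's restart does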
	I0116 23:55:01.414436   59938 addons.go:505] enable addons completed in 1.284891826s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0116 23:55:02.377542   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
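The node_ready.go:58 poll above repeats until the node's Ready condition turns "True" (it is retried within the 6m0s budget set at 23:55:00). An equivalent one-off check from the host, illustrative only and outside the harness, would be:

    kubectl --context no-preload-085322 get node no-preload-085322 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'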
	I0116 23:55:01.175656   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Found IP for machine: 192.168.61.144
	I0116 23:55:01.176276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has current primary IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176287   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserving static IP address...
	I0116 23:55:01.176764   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserved static IP address: 192.168.61.144
	I0116 23:55:01.176803   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.176821   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for SSH to be available...
	I0116 23:55:01.176849   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | skip adding static IP to network mk-default-k8s-diff-port-967325 - found existing host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"}
	I0116 23:55:01.176862   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Getting to WaitForSSH function...
	I0116 23:55:01.179585   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180052   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.180086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH client type: external
	I0116 23:55:01.180225   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa (-rw-------)
	I0116 23:55:01.180258   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:01.180280   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | About to run SSH command:
	I0116 23:55:01.180298   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | exit 0
	I0116 23:55:01.287063   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:01.287361   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetConfigRaw
	I0116 23:55:01.288015   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.291188   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291601   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.291651   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291892   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:55:01.292147   60269 machine.go:88] provisioning docker machine ...
	I0116 23:55:01.292171   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:01.292392   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292603   60269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-967325"
	I0116 23:55:01.292631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.295688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.296107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296214   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.296399   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296557   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296732   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.296957   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.297484   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.297508   60269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-967325 && echo "default-k8s-diff-port-967325" | sudo tee /etc/hostname
	I0116 23:55:01.444451   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-967325
	
	I0116 23:55:01.444484   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.447658   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448083   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.448130   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448237   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.448482   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448670   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448836   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.449035   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.449518   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.449549   60269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-967325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-967325/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-967325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:01.592961   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:01.592998   60269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:01.593037   60269 buildroot.go:174] setting up certificates
	I0116 23:55:01.593052   60269 provision.go:83] configureAuth start
	I0116 23:55:01.593066   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.593369   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.596637   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597053   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.597093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.599945   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600294   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.600332   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600435   60269 provision.go:138] copyHostCerts
	I0116 23:55:01.600492   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:01.600500   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:01.600560   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:01.600653   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:01.600657   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:01.600675   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:01.600733   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:01.600736   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:01.600751   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:01.600807   60269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-967325 san=[192.168.61.144 192.168.61.144 localhost 127.0.0.1 minikube default-k8s-diff-port-967325]
	I0116 23:55:01.777575   60269 provision.go:172] copyRemoteCerts
	I0116 23:55:01.777655   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:01.777685   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.780729   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.781117   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781323   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.781493   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.781672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.781817   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:01.875542   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:01.898144   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 23:55:01.923770   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:01.947374   60269 provision.go:86] duration metric: configureAuth took 354.306627ms
	I0116 23:55:01.947400   60269 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:01.947656   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:55:01.947752   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.950688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951006   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.951031   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951309   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.951475   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951846   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.952024   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.952549   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.952575   60269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:02.296465   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:02.296504   60269 machine.go:91] provisioned docker machine in 1.004340116s
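A note on the %!s(MISSING) fragments in the CRI-O provisioning command above (and in the date probe further down): they are Go format verbs whose argument was dropped by the logger, not text that actually ran on the VM. Judging from the command's own output, the step effectively wrote the insecure-registry option for CRI-O and restarted the service, roughly:

    # Reconstructed intent of the logged command; the literal log shows %!s(MISSING)
    # where the logger dropped the format argument.
    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio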
	I0116 23:55:02.296517   60269 start.go:300] post-start starting for "default-k8s-diff-port-967325" (driver="kvm2")
	I0116 23:55:02.296533   60269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:02.296559   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.296898   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:02.296931   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.299843   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.300330   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300424   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.300613   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.300813   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.300988   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.392380   60269 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:02.396719   60269 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:02.396746   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:02.396840   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:02.396931   60269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:02.397013   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:02.405217   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:02.428260   60269 start.go:303] post-start completed in 131.726459ms
	I0116 23:55:02.428289   60269 fix.go:56] fixHost completed within 20.901025477s
	I0116 23:55:02.428351   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.431541   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.431904   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.431935   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.432124   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.432327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432679   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.432865   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:02.433181   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:02.433200   60269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:02.567559   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449302.518065106
	
	I0116 23:55:02.567583   60269 fix.go:206] guest clock: 1705449302.518065106
	I0116 23:55:02.567592   60269 fix.go:219] Guest: 2024-01-16 23:55:02.518065106 +0000 UTC Remote: 2024-01-16 23:55:02.428292966 +0000 UTC m=+263.717566224 (delta=89.77214ms)
	I0116 23:55:02.567628   60269 fix.go:190] guest clock delta is within tolerance: 89.77214ms
	I0116 23:55:02.567634   60269 start.go:83] releasing machines lock for "default-k8s-diff-port-967325", held for 21.040406039s
	I0116 23:55:02.567676   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.567951   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:02.571196   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.571612   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.571641   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.572815   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573415   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573626   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573709   60269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:02.573777   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.573935   60269 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:02.573963   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.577057   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577347   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577687   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577741   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577786   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577804   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577976   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578023   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578172   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578358   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578359   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578488   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.578514   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.707601   60269 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:02.715420   60269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:02.871362   60269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:02.878362   60269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:02.878438   60269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:02.898508   60269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:02.898534   60269 start.go:475] detecting cgroup driver to use...
	I0116 23:55:02.898627   60269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:02.915544   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:02.929881   60269 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:02.929948   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:02.946126   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:02.963314   60269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:03.087669   60269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:03.231908   60269 docker.go:233] disabling docker service ...
	I0116 23:55:03.232001   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:03.247745   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:03.263573   60269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:03.394931   60269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:03.533725   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:03.550475   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:03.571922   60269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:55:03.571984   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.584086   60269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:03.584195   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.595191   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.604671   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.614076   60269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:03.623637   60269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:03.632143   60269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:03.632225   60269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:03.645964   60269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:03.657719   60269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
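
	The block above (pause image, cgroup driver, netfilter module, ip_forward, daemon-reload) boils down to two in-place edits of CRI-O's drop-in config before the daemon is restarted. A minimal Go sketch of those two sed substitutions, assuming the same /etc/crio/crio.conf.d/02-crio.conf path as the log; the error handling and file mode are illustrative, not minikube's actual code:

	// rewrite pause_image and cgroup_manager in the CRI-O drop-in config,
	// mirroring the two `sed -i` commands in the log above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	Run as root (the log uses sudo); after the file is rewritten, the daemon-reload and crio restart in the surrounding lines pick up the change.
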
	I0116 23:54:59.164409   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.363424   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.434315   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.505227   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:59.505321   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.006175   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.505693   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.005697   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.505467   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.005808   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.033017   60073 api_server.go:72] duration metric: took 2.527792184s to wait for apiserver process to appear ...
	I0116 23:55:02.033039   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:02.033056   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:03.785123   60269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:03.976744   60269 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:03.976819   60269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:03.981545   60269 start.go:543] Will wait 60s for crictl version
	I0116 23:55:03.981598   60269 ssh_runner.go:195] Run: which crictl
	I0116 23:55:03.985233   60269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:04.033443   60269 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:04.033541   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.087776   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.142302   60269 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:55:02.594568   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Start
	I0116 23:55:02.594750   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring networks are active...
	I0116 23:55:02.595457   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network default is active
	I0116 23:55:02.595812   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network mk-old-k8s-version-771669 is active
	I0116 23:55:02.596285   59622 main.go:141] libmachine: (old-k8s-version-771669) Getting domain xml...
	I0116 23:55:02.597150   59622 main.go:141] libmachine: (old-k8s-version-771669) Creating domain...
	I0116 23:55:03.999986   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting to get IP...
	I0116 23:55:04.001060   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.001581   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.001663   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.001550   61289 retry.go:31] will retry after 298.561748ms: waiting for machine to come up
	I0116 23:55:04.302120   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.302820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.302847   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.302767   61289 retry.go:31] will retry after 342.293835ms: waiting for machine to come up
	I0116 23:55:04.646424   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.647107   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.647133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.647055   61289 retry.go:31] will retry after 395.611503ms: waiting for machine to come up
	I0116 23:55:05.046785   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.047276   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.047304   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.047189   61289 retry.go:31] will retry after 552.22886ms: waiting for machine to come up
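
	The 59622 lines above show the usual libmachine pattern while a freshly started domain acquires a DHCP lease: probe for the IP and, on failure, retry after a growing, jittered delay. A rough, self-contained sketch of that retry shape; the probe below is a stand-in, not minikube's actual DHCP-lease lookup:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil calls probe until it succeeds or the timeout elapses,
	// sleeping a growing, jittered interval between attempts, similar to
	// the varying "will retry after ..." waits in the log.
	func retryUntil(probe func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		wait := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if err := probe(); err == nil {
				return nil
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			wait = wait * 3 / 2
		}
		return errors.New("machine did not come up in time")
	}

	func main() {
		start := time.Now()
		err := retryUntil(func() error {
			if time.Since(start) > 3*time.Second { // pretend the IP shows up after a while
				return nil
			}
			return errors.New("no IP yet")
		}, 30*time.Second)
		fmt.Println("result:", err)
	}
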
	I0116 23:55:07.029353   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.029384   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.029401   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.187789   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.187830   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.187877   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.197889   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.197924   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.533214   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.540976   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:07.541008   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.033550   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.044749   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:08.044779   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.533231   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.540197   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0116 23:55:08.551065   60073 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:08.551108   60073 api_server.go:131] duration metric: took 6.518060223s to wait for apiserver health ...
	I0116 23:55:08.551119   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:55:08.551128   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:08.553370   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
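
	The healthz sequence above (403 while anonymous access is still forbidden, then 500 while the rbac/bootstrap-roles post-start hook finishes, then 200) is the signal the restart logic waits on. A minimal sketch of that polling loop, assuming an untrusted apiserver certificate and an illustrative timeout rather than minikube's actual values:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the deadline passes. 403 and 500 responses mean "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver cert is not trusted in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.226:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
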
	I0116 23:55:04.377661   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:06.377732   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:07.377978   59938 node_ready.go:49] node "no-preload-085322" has status "Ready":"True"
	I0116 23:55:07.378007   59938 node_ready.go:38] duration metric: took 7.004955625s waiting for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:07.378019   59938 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:07.394319   59938 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401604   59938 pod_ready.go:92] pod "coredns-76f75df574-ptq95" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.401634   59938 pod_ready.go:81] duration metric: took 7.260618ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401647   59938 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412094   59938 pod_ready.go:92] pod "etcd-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.412123   59938 pod_ready.go:81] duration metric: took 10.46753ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412137   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922096   59938 pod_ready.go:92] pod "kube-apiserver-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.922169   59938 pod_ready.go:81] duration metric: took 510.023791ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922208   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929615   59938 pod_ready.go:92] pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.929645   59938 pod_ready.go:81] duration metric: took 7.422332ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929659   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178529   59938 pod_ready.go:92] pod "kube-proxy-64z5c" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.178558   59938 pod_ready.go:81] duration metric: took 248.89013ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178572   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
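
	The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A hedged client-go sketch of the same check (requires k8s.io/client-go as a dependency; the kubeconfig path below is a placeholder, and the pod name is taken from the log for illustration):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-085322", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}
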
	I0116 23:55:04.144239   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:04.147395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.147816   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:04.147864   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.148032   60269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:04.152106   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:04.166312   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:55:04.166412   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:04.207955   60269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:55:04.208024   60269 ssh_runner.go:195] Run: which lz4
	I0116 23:55:04.211817   60269 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 23:55:04.215791   60269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:04.215816   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:55:06.109275   60269 crio.go:444] Took 1.897478 seconds to copy over tarball
	I0116 23:55:06.109361   60269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
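
	Because no images were preloaded in the VM, the tarball is copied over and unpacked into /var with lz4, as the lines above show. A small sketch of that extraction step using os/exec; the tar flags and paths mirror the log, but shelling out directly is illustrative, not minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks the preloaded images tarball into /var,
	// preserving extended attributes as the log's tar invocation does.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball not present: %w", err)
		}
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
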
	I0116 23:55:08.555066   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:08.584102   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:08.660533   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:08.680559   60073 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:08.680588   60073 system_pods.go:61] "coredns-5dd5756b68-49p2f" [5241a39a-599e-4ae2-b8c8-7494382819d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:08.680595   60073 system_pods.go:61] "etcd-embed-certs-837871" [99fce5e6-124e-4e96-b722-41c0be595863] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:08.680603   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [7bf73dd6-7f27-482a-896a-a5097bd047a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:08.680609   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [be8f34fb-2d00-4c86-aab3-c4d74d92d42c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:08.680615   60073 system_pods.go:61] "kube-proxy-nglts" [3ec00f1a-258b-4da3-9b41-dbd96156de04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:08.680624   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [f9af2c43-cb66-4ebb-b23c-4f898be33d64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:08.680669   60073 system_pods.go:61] "metrics-server-57f55c9bc5-npd7s" [5aa75079-2c85-4fde-ba88-9ae5bb73ecc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:08.680678   60073 system_pods.go:61] "storage-provisioner" [5bae4d8b-030b-4476-8aa6-f4a66a8f80a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:55:08.680685   60073 system_pods.go:74] duration metric: took 20.127241ms to wait for pod list to return data ...
	I0116 23:55:08.680695   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:08.685562   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:08.685594   60073 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:08.685604   60073 node_conditions.go:105] duration metric: took 4.905393ms to run NodePressure ...
	I0116 23:55:08.685622   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:05.600887   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.601408   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.601444   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.601312   61289 retry.go:31] will retry after 584.67072ms: waiting for machine to come up
	I0116 23:55:06.188018   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:06.188524   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:06.188550   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:06.188434   61289 retry.go:31] will retry after 859.064841ms: waiting for machine to come up
	I0116 23:55:07.048810   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:07.049461   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:07.049491   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:07.049417   61289 retry.go:31] will retry after 1.064800753s: waiting for machine to come up
	I0116 23:55:08.115741   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:08.116406   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:08.116430   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:08.116372   61289 retry.go:31] will retry after 1.289118736s: waiting for machine to come up
	I0116 23:55:09.407820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:09.408291   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:09.408319   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:09.408262   61289 retry.go:31] will retry after 1.623353195s: waiting for machine to come up
	I0116 23:55:08.979310   59938 pod_ready.go:92] pod "kube-scheduler-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.979407   59938 pod_ready.go:81] duration metric: took 800.824219ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.979438   59938 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.546193   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:09.452388   60269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342992298s)
	I0116 23:55:09.452415   60269 crio.go:451] Took 3.343109 seconds to extract the tarball
	I0116 23:55:09.452423   60269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:09.497202   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:09.552426   60269 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:55:09.552460   60269 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:55:09.552532   60269 ssh_runner.go:195] Run: crio config
	I0116 23:55:09.623685   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:09.623716   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:09.623743   60269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:09.623767   60269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-967325 NodeName:default-k8s-diff-port-967325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:55:09.623938   60269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-967325"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:09.624024   60269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-967325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 23:55:09.624079   60269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:55:09.632768   60269 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:09.632838   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:09.642978   60269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 23:55:09.660304   60269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:09.677864   60269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 23:55:09.699234   60269 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:09.703170   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:09.718511   60269 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325 for IP: 192.168.61.144
	I0116 23:55:09.718551   60269 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:09.718727   60269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:09.718798   60269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:09.718895   60269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/client.key
	I0116 23:55:09.718975   60269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key.a430fbc2
	I0116 23:55:09.719039   60269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key
	I0116 23:55:09.719175   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:09.719225   60269 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:09.719240   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:09.719283   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:09.719318   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:09.719358   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:09.719416   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:09.720339   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:09.748578   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:55:09.778396   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:09.803745   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:55:09.828009   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:09.850951   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:09.874273   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:09.897385   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:09.923319   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:09.946301   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:09.970778   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:09.994497   60269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:10.013259   60269 ssh_runner.go:195] Run: openssl version
	I0116 23:55:10.020357   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:10.032324   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037071   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037122   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.043220   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:10.052796   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:10.063065   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.067904   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.068000   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.074570   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:10.087080   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:10.099734   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105299   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105360   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.112084   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:10.123175   60269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:10.127669   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:10.133522   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:10.139085   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:10.145018   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:10.150920   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:10.156719   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
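
	The openssl x509 -checkend 86400 runs above simply verify that each control-plane certificate will still be valid 24 hours from now. The same check with crypto/x509, assuming the etcd server cert path from the log; doing it in Go rather than via openssl is my choice, not minikube's:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the certificate at path is still valid d from now.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
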
	I0116 23:55:10.162808   60269 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:10.162893   60269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:10.162936   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:10.208917   60269 cri.go:89] found id: ""
	I0116 23:55:10.209008   60269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:10.221689   60269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:10.221710   60269 kubeadm.go:636] restartCluster start
	I0116 23:55:10.221776   60269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:10.233762   60269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.234916   60269 kubeconfig.go:92] found "default-k8s-diff-port-967325" server: "https://192.168.61.144:8444"
	I0116 23:55:10.237484   60269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:10.246418   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.246495   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.257759   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.747378   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.747466   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.761884   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.247445   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.247543   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.258490   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.747483   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.747623   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.764389   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.246997   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.247122   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.262538   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.747219   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.747387   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.762535   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.246636   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.246705   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.258883   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.747504   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.747588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.759640   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:09.229704   60073 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224745   60073 kubeadm.go:787] kubelet initialised
	I0116 23:55:10.224771   60073 kubeadm.go:788] duration metric: took 994.984702ms waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224781   60073 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:11.348058   60073 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.356516   60073 pod_ready.go:102] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:13.856540   60073 pod_ready.go:92] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:13.856573   60073 pod_ready.go:81] duration metric: took 2.508479475s waiting for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.856586   60073 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.033009   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:11.033544   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:11.033588   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:11.033487   61289 retry.go:31] will retry after 1.553841353s: waiting for machine to come up
	I0116 23:55:12.588794   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:12.589269   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:12.589297   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:12.589245   61289 retry.go:31] will retry after 1.907517113s: waiting for machine to come up
	I0116 23:55:14.499305   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:14.499734   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:14.499759   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:14.499683   61289 retry.go:31] will retry after 3.406811143s: waiting for machine to come up
	I0116 23:55:13.986208   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:15.987948   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:18.490012   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:14.247197   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.247299   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.262013   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:14.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.746558   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.761452   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.246988   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.247075   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.261345   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.747524   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.747618   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.760291   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.246551   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.246648   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.260545   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.746471   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.746585   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.758637   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.247227   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.247331   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.258514   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.747046   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.747138   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.758877   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.247489   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.247561   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.259581   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.747241   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.747335   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.759146   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.867702   60073 pod_ready.go:102] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:17.864681   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.864706   60073 pod_ready.go:81] duration metric: took 4.008111977s waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.864718   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873106   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.873127   60073 pod_ready.go:81] duration metric: took 8.400576ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873136   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878501   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.878519   60073 pod_ready.go:81] duration metric: took 5.375395ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878535   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883653   60073 pod_ready.go:92] pod "kube-proxy-nglts" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.883669   60073 pod_ready.go:81] duration metric: took 5.128525ms waiting for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883680   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.888978   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.888996   60073 pod_ready.go:81] duration metric: took 5.309484ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.889011   60073 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.908092   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:17.908486   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:17.908520   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:17.908432   61289 retry.go:31] will retry after 3.983135021s: waiting for machine to come up
	I0116 23:55:20.987833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:22.989682   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:19.246437   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.246547   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.257900   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:19.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.746572   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.758509   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.247334   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:20.247418   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:20.258909   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.258939   60269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:20.258948   60269 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:20.258958   60269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:20.259023   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:20.300659   60269 cri.go:89] found id: ""
	I0116 23:55:20.300740   60269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:20.315326   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:20.323563   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:20.323629   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331846   60269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331871   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:20.443085   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.556705   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113585461s)
	I0116 23:55:21.556730   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.745024   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.824910   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.916770   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:21.916856   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.416983   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.917411   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:23.417012   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:19.896636   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.898504   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.896143   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896665   59622 main.go:141] libmachine: (old-k8s-version-771669) Found IP for machine: 192.168.72.114
	I0116 23:55:21.896717   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has current primary IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896729   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserving static IP address...
	I0116 23:55:21.897128   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.897157   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | skip adding static IP to network mk-old-k8s-version-771669 - found existing host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"}
	I0116 23:55:21.897174   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Getting to WaitForSSH function...
	I0116 23:55:21.897194   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserved static IP address: 192.168.72.114
	I0116 23:55:21.897207   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting for SSH to be available...
	I0116 23:55:21.900064   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900492   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.900531   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900775   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH client type: external
	I0116 23:55:21.900805   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa (-rw-------)
	I0116 23:55:21.900835   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:21.900852   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | About to run SSH command:
	I0116 23:55:21.900867   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | exit 0
	I0116 23:55:22.002573   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:22.003051   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetConfigRaw
	I0116 23:55:22.003790   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.007208   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.007726   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007947   59622 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:55:22.008199   59622 machine.go:88] provisioning docker machine ...
	I0116 23:55:22.008225   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.008439   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008649   59622 buildroot.go:166] provisioning hostname "old-k8s-version-771669"
	I0116 23:55:22.008672   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008859   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.011893   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012288   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.012321   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012475   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.012655   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.012825   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.013009   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.013176   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.013645   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.013669   59622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-771669 && echo "old-k8s-version-771669" | sudo tee /etc/hostname
	I0116 23:55:22.159863   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-771669
	
	I0116 23:55:22.159897   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.162806   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163257   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.163296   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163483   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.163700   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.163882   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.164023   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.164179   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.164551   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.164569   59622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-771669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-771669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-771669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:22.309881   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:22.309914   59622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:22.309935   59622 buildroot.go:174] setting up certificates
	I0116 23:55:22.309945   59622 provision.go:83] configureAuth start
	I0116 23:55:22.309957   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.310198   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.312567   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.312901   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.312930   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.313107   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.315382   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.315767   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.315807   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.316000   59622 provision.go:138] copyHostCerts
	I0116 23:55:22.316043   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:22.316053   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:22.316116   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:22.316202   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:22.316210   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:22.316228   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:22.316289   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:22.316296   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:22.316312   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:22.316365   59622 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-771669 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube old-k8s-version-771669]
	I0116 23:55:22.437253   59622 provision.go:172] copyRemoteCerts
	I0116 23:55:22.437325   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:22.437348   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.440075   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440363   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.440390   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440626   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.440808   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.440960   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.441145   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:22.536222   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:22.562061   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 23:55:22.586856   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:22.610936   59622 provision.go:86] duration metric: configureAuth took 300.975023ms
	I0116 23:55:22.610965   59622 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:22.611217   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:55:22.611306   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.614770   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615218   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.615253   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615508   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.615738   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.615931   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.616078   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.616259   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.616622   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.616641   59622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:22.958075   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:22.958102   59622 machine.go:91] provisioned docker machine in 949.885683ms
	I0116 23:55:22.958121   59622 start.go:300] post-start starting for "old-k8s-version-771669" (driver="kvm2")
	I0116 23:55:22.958136   59622 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:22.958160   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.958492   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:22.958528   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.961489   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.961850   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.961879   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.962042   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.962232   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.962423   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.962585   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.058948   59622 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:23.063281   59622 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:23.063309   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:23.063383   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:23.063477   59622 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:23.063589   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:23.075280   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:23.099934   59622 start.go:303] post-start completed in 141.796411ms
	I0116 23:55:23.099963   59622 fix.go:56] fixHost completed within 20.532183026s
	I0116 23:55:23.099986   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.102938   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103320   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.103355   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103471   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.103682   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103837   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103981   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.104148   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:23.104525   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:23.104539   59622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:23.239875   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449323.216935077
	
	I0116 23:55:23.239947   59622 fix.go:206] guest clock: 1705449323.216935077
	I0116 23:55:23.239963   59622 fix.go:219] Guest: 2024-01-16 23:55:23.216935077 +0000 UTC Remote: 2024-01-16 23:55:23.099966517 +0000 UTC m=+357.574360679 (delta=116.96856ms)
	I0116 23:55:23.239987   59622 fix.go:190] guest clock delta is within tolerance: 116.96856ms
	I0116 23:55:23.239994   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 20.672247822s
	I0116 23:55:23.240021   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.240303   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:23.243487   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.243962   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.243999   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.244245   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244731   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244917   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.245023   59622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:23.245091   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.245237   59622 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:23.245261   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.248169   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248391   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248664   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.248691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248835   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.248936   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.249012   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.249043   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249196   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249284   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.249351   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.249454   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249607   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249737   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.380837   59622 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:23.387163   59622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:23.543350   59622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:23.550519   59622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:23.550587   59622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:23.565019   59622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:23.565046   59622 start.go:475] detecting cgroup driver to use...
	I0116 23:55:23.565125   59622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:23.579314   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:23.591247   59622 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:23.591310   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:23.605294   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:23.618799   59622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:23.742752   59622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:23.876604   59622 docker.go:233] disabling docker service ...
	I0116 23:55:23.876678   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:23.891240   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:23.906010   59622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:24.059751   59622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:24.186517   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:24.201344   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:24.218947   59622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 23:55:24.219014   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.230843   59622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:24.230917   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.243120   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.252562   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.264610   59622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:24.275702   59622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:24.284982   59622 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:24.285046   59622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:24.298681   59622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:24.307743   59622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:55:24.425125   59622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:24.597300   59622 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:24.597373   59622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:24.603241   59622 start.go:543] Will wait 60s for crictl version
	I0116 23:55:24.603314   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:24.607580   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:24.648923   59622 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:24.649022   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.696485   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.754660   59622 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 23:55:24.756045   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:24.759033   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759392   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:24.759432   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759771   59622 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:24.764448   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:24.777724   59622 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 23:55:24.777812   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:24.825020   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:24.825088   59622 ssh_runner.go:195] Run: which lz4
	I0116 23:55:24.829208   59622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:24.833495   59622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:24.833523   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 23:55:24.992848   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:27.488098   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:23.916961   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.417588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.441144   60269 api_server.go:72] duration metric: took 2.5243712s to wait for apiserver process to appear ...
	I0116 23:55:24.441176   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:24.441198   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:24.441742   60269 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0116 23:55:24.941292   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.835831   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.835867   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.835882   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.868017   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.868058   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.942282   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.960876   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:27.960928   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:28.442258   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.449969   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.450001   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:24.397456   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:26.397862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.404313   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.941892   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.959617   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.959651   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:29.441742   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:29.446933   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0116 23:55:29.455520   60269 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:29.455548   60269 api_server.go:131] duration metric: took 5.014364838s to wait for apiserver health ...
	I0116 23:55:29.455561   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:29.455569   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:29.457775   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:26.372140   59622 crio.go:444] Took 1.542968 seconds to copy over tarball
	I0116 23:55:26.372233   59622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:29.316720   59622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944443375s)
	I0116 23:55:29.316749   59622 crio.go:451] Took 2.944578 seconds to extract the tarball
	I0116 23:55:29.316760   59622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:29.359053   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:29.407438   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:29.407466   59622 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:55:29.407526   59622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.407582   59622 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.407605   59622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.407624   59622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.407656   59622 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 23:55:29.407657   59622 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.407840   59622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.407530   59622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.409393   59622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 23:55:29.409457   59622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.409480   59622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.409647   59622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.409675   59622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.409682   59622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.622629   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.626907   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.630596   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 23:55:29.633693   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.635868   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.644919   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.649358   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.724339   59622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 23:55:29.724400   59622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.724467   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.795647   59622 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 23:55:29.795694   59622 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.795747   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.844312   59622 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 23:55:29.844373   59622 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 23:55:29.844427   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849856   59622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 23:55:29.849876   59622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.849911   59622 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 23:55:29.849928   59622 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.849956   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850005   59622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 23:55:29.850030   59622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.850047   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.850062   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850101   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.852839   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 23:55:29.872722   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.872753   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.872821   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.872997   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.963139   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 23:55:29.967047   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 23:55:29.981726   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 23:55:30.047814   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 23:55:30.047906   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 23:55:30.047972   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 23:55:30.048002   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 23:55:30.281680   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:30.423881   59622 cache_images.go:92] LoadImages completed in 1.016396141s
	W0116 23:55:30.423996   59622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0116 23:55:30.424113   59622 ssh_runner.go:195] Run: crio config
	I0116 23:55:30.486915   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:30.486935   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:30.486951   59622 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:30.486975   59622 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-771669 NodeName:old-k8s-version-771669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 23:55:30.487151   59622 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-771669"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-771669
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.114:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:30.487252   59622 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-771669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:55:30.487320   59622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 23:55:30.497629   59622 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:30.497706   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:30.505710   59622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 23:55:30.523292   59622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:30.539544   59622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 23:55:30.557436   59622 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:30.561329   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:29.488446   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:32.775251   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:29.459468   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:29.471218   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:29.488687   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:29.499433   60269 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:29.499458   60269 system_pods.go:61] "coredns-5dd5756b68-7kwrd" [38a96fe5-70a8-46e6-b899-b39558e08855] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:29.499465   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [bc2e7805-71f2-4924-80d7-2dd853ebeea9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:29.499472   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [8c01f8da-0156-4d16-b5e7-262427171137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:29.499484   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [04b93c96-ebc0-4257-b480-7be1ea9f7fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:29.499496   60269 system_pods.go:61] "kube-proxy-jmq58" [ec5c282f-04c8-4839-a16f-0a2024e0d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:29.499521   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [11e73d49-a3ba-44b3-9630-fd07fb23777f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:29.499533   60269 system_pods.go:61] "metrics-server-57f55c9bc5-bkbpm" [6ddb8af1-da20-4400-b6ba-6f0cf342b115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:29.499538   60269 system_pods.go:61] "storage-provisioner" [5b22598c-c5e0-4a9e-96f3-1732ecd018a1] Running
	I0116 23:55:29.499544   60269 system_pods.go:74] duration metric: took 10.840963ms to wait for pod list to return data ...
	I0116 23:55:29.499550   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:29.502918   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:29.502954   60269 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:29.502965   60269 node_conditions.go:105] duration metric: took 3.409475ms to run NodePressure ...
	I0116 23:55:29.502985   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:29.743687   60269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749616   60269 kubeadm.go:787] kubelet initialised
	I0116 23:55:29.749676   60269 kubeadm.go:788] duration metric: took 5.958924ms waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749687   60269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:29.756788   60269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.762593   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762669   60269 pod_ready.go:81] duration metric: took 5.856721ms waiting for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.762686   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762695   60269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.768772   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768801   60269 pod_ready.go:81] duration metric: took 6.092773ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.768816   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768824   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.775409   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775442   60269 pod_ready.go:81] duration metric: took 6.605139ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.775455   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775463   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.902106   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902206   60269 pod_ready.go:81] duration metric: took 126.731712ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.902236   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902269   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829869   60269 pod_ready.go:92] pod "kube-proxy-jmq58" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:30.829891   60269 pod_ready.go:81] duration metric: took 927.598475ms waiting for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829900   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:32.831782   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.899557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:33.397105   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.574029   59622 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669 for IP: 192.168.72.114
	I0116 23:55:30.890778   59622 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:30.890952   59622 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:30.891020   59622 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:30.891123   59622 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/client.key
	I0116 23:55:31.309085   59622 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key.9adeb8c5
	I0116 23:55:31.309205   59622 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key
	I0116 23:55:31.309360   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:31.309405   59622 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:31.309417   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:31.309461   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:31.309514   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:31.309547   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:31.309606   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:31.310493   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:31.335886   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:55:31.358617   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:31.382183   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:55:31.407509   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:31.429683   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:31.453368   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:31.476083   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:31.499326   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:31.522939   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:31.548912   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:31.571716   59622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:31.587851   59622 ssh_runner.go:195] Run: openssl version
	I0116 23:55:31.593185   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:31.602521   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.606986   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.607049   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.612447   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:31.622043   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:31.631959   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636586   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636653   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.642415   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:31.651566   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:31.660990   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665574   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665624   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.671129   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:31.680951   59622 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:31.685144   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:31.690488   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:31.696140   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:31.702013   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:31.707887   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:31.713601   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:55:31.719957   59622 kubeadm.go:404] StartCluster: {Name:old-k8s-version-771669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:31.720050   59622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:31.720106   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:31.764090   59622 cri.go:89] found id: ""
	I0116 23:55:31.764179   59622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:31.772783   59622 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:31.772800   59622 kubeadm.go:636] restartCluster start
	I0116 23:55:31.772900   59622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:31.782951   59622 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:31.784108   59622 kubeconfig.go:92] found "old-k8s-version-771669" server: "https://192.168.72.114:8443"
	I0116 23:55:31.786822   59622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:31.795516   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:31.795564   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:31.806541   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.296087   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.296205   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.308136   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.796155   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.796250   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.812275   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.295834   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.295918   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.309867   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.796504   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.796592   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.808880   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.296500   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.296567   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.308101   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.795674   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.795765   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.808334   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:35.295900   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.295998   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.308522   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.987445   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:37.488388   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:34.836821   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:36.837242   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.896319   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.396168   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.796048   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.796157   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.809841   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.296449   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.296573   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.309339   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.795874   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.795953   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.810740   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.296322   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.296421   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.308384   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.796469   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.796576   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.810173   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.295663   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.295750   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.307391   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.795952   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.796050   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.809147   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.295669   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.295754   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.308210   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.796104   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.796226   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.808134   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:40.295713   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.295815   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.307552   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.986946   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.487118   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.838230   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:39.837451   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:39.837475   60269 pod_ready.go:81] duration metric: took 9.007568234s waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:39.837495   60269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:41.844595   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.397089   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.896014   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.795619   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.795698   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.809529   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.296081   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.296153   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.309642   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.796355   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.796439   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.808383   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.808409   59622 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:41.808417   59622 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:41.808426   59622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:41.808480   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:41.851612   59622 cri.go:89] found id: ""
	I0116 23:55:41.851668   59622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:41.867103   59622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:41.876244   59622 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:41.876306   59622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886007   59622 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886029   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.004968   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.972680   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.175241   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.242840   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.330848   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:43.330935   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:43.831021   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.331539   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.831545   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.331601   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.354248   59622 api_server.go:72] duration metric: took 2.023403352s to wait for apiserver process to appear ...
	I0116 23:55:45.354271   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:45.354287   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:45.354802   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": dial tcp 192.168.72.114:8443: connect: connection refused
	I0116 23:55:44.988114   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.486765   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:43.846368   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.848129   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:48.344150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:44.897147   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.396873   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.855032   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:50.855392   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 23:55:50.855430   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.372327   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.372361   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.372383   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.429072   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.429102   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.854848   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.861367   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:51.861393   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.354990   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.360925   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:52.360951   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.854778   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.861036   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:55:52.868982   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:55:52.869013   59622 api_server.go:131] duration metric: took 7.514729701s to wait for apiserver health ...
	I0116 23:55:52.869024   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:52.869033   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:52.870842   59622 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:49.486891   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.489411   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:50.345462   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.345784   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:49.397270   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.397489   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:53.398253   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.872155   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:52.883251   59622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:52.904708   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:52.916515   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:55:52.916550   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:55:52.916558   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:55:52.916564   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:55:52.916571   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Pending
	I0116 23:55:52.916577   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:55:52.916584   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:55:52.916597   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:55:52.916606   59622 system_pods.go:74] duration metric: took 11.876364ms to wait for pod list to return data ...
	I0116 23:55:52.916618   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:52.920125   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:52.920158   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:52.920178   59622 node_conditions.go:105] duration metric: took 3.551281ms to run NodePressure ...
	I0116 23:55:52.920199   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:53.157112   59622 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161560   59622 kubeadm.go:787] kubelet initialised
	I0116 23:55:53.161590   59622 kubeadm.go:788] duration metric: took 4.45031ms waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161601   59622 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:53.167210   59622 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.172679   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172705   59622 pod_ready.go:81] duration metric: took 5.453621ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.172713   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172722   59622 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.178090   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178121   59622 pod_ready.go:81] duration metric: took 5.38864ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.178132   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178141   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.183932   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183963   59622 pod_ready.go:81] duration metric: took 5.809315ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.183973   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183979   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.309476   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309502   59622 pod_ready.go:81] duration metric: took 125.513469ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.309518   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309526   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.710400   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710426   59622 pod_ready.go:81] duration metric: took 400.892114ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.710435   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710441   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:54.108608   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108638   59622 pod_ready.go:81] duration metric: took 398.187187ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:54.108652   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108661   59622 pod_ready.go:38] duration metric: took 947.048567ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:54.108682   59622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:54.128862   59622 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:54.128889   59622 kubeadm.go:640] restartCluster took 22.356081524s
	I0116 23:55:54.128900   59622 kubeadm.go:406] StartCluster complete in 22.408946885s
	I0116 23:55:54.128919   59622 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.129004   59622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:54.131909   59622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.132201   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:54.132350   59622 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:54.132423   59622 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-771669"
	I0116 23:55:54.132445   59622 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-771669"
	I0116 23:55:54.132446   59622 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-771669"
	W0116 23:55:54.132457   59622 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:54.132467   59622 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:54.132468   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0116 23:55:54.132479   59622 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:54.132520   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132551   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132889   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.132943   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133041   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133083   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133245   59622 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-771669"
	I0116 23:55:54.133294   59622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-771669"
	I0116 23:55:54.133724   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133789   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.148645   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0116 23:55:54.148879   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 23:55:54.149227   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149356   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149715   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149739   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.149900   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149917   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.150032   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150210   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.150281   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150883   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.150932   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.154047   59622 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-771669"
	W0116 23:55:54.154070   59622 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:54.154099   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.154457   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.154502   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.156296   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0116 23:55:54.156719   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.157170   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.157199   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.157673   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.158266   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.158321   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.168301   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0116 23:55:54.168898   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.169505   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.169524   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.169888   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.170106   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.171966   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.174198   59622 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:54.173406   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0116 23:55:54.179587   59622 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.179605   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:54.179625   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.174560   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0116 23:55:54.180004   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180109   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180627   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180653   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180768   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180790   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180993   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181177   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181353   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.181578   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.181627   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.183580   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.185359   59622 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:54.184028   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.184548   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.186663   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:54.186672   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.186679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:54.186699   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.186698   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.186864   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.186964   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.187041   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.189698   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190070   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.190133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190266   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.190461   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.190582   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.190678   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.215481   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0116 23:55:54.215974   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.216416   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.216435   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.216816   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.217016   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.219327   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.219556   59622 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.219571   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:54.219588   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.222719   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223367   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.223154   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.223442   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223564   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.223712   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.223850   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.356173   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:54.356192   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:54.371191   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.410651   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:54.410679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:54.413826   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.524186   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.524211   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:54.553600   59622 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:54.610636   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.692080   59622 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-771669" context rescaled to 1 replicas
	I0116 23:55:54.692117   59622 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:54.694001   59622 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:54.695339   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:55.104119   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104142   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104162   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104148   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104471   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104493   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.104504   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104514   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104558   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104729   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104745   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104748   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105133   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105152   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105185   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.105199   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.105402   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.105496   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105518   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.113836   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.113861   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.114230   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.114254   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.114275   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.125955   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.125983   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.125955   59622 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:55:55.126228   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126243   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126267   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.126278   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.126579   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126599   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126609   59622 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:55.126587   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.128592   59622 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 23:55:55.129717   59622 addons.go:505] enable addons completed in 997.38021ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 23:55:53.987019   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.987081   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.485357   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:54.345875   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:56.347375   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.898737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.905488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.130634   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:59.630394   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:56:00.487739   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.985925   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.845233   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:00.845467   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:03.344488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.130130   59622 node_ready.go:49] node "old-k8s-version-771669" has status "Ready":"True"
	I0116 23:56:02.130152   59622 node_ready.go:38] duration metric: took 7.004088356s waiting for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:56:02.130160   59622 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.135239   59622 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140322   59622 pod_ready.go:92] pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.140347   59622 pod_ready.go:81] duration metric: took 5.084772ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140358   59622 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144917   59622 pod_ready.go:92] pod "etcd-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.144938   59622 pod_ready.go:81] duration metric: took 4.572247ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144946   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149588   59622 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.149606   59622 pod_ready.go:81] duration metric: took 4.65461ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149614   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153874   59622 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.153891   59622 pod_ready.go:81] duration metric: took 4.272031ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153899   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531721   59622 pod_ready.go:92] pod "kube-proxy-9ghls" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.531742   59622 pod_ready.go:81] duration metric: took 377.837979ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531751   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930934   59622 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.930957   59622 pod_ready.go:81] duration metric: took 399.199037ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930966   59622 pod_ready.go:38] duration metric: took 800.791409ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.930982   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:56:02.931031   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:56:02.945606   59622 api_server.go:72] duration metric: took 8.253459173s to wait for apiserver process to appear ...
	I0116 23:56:02.945631   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:56:02.945649   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:56:02.952493   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:56:02.953510   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:56:02.953536   59622 api_server.go:131] duration metric: took 7.895148ms to wait for apiserver health ...
	I0116 23:56:02.953545   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:56:03.133648   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:56:03.133673   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.133679   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.133683   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.133688   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.133691   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.133695   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.133698   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.133704   59622 system_pods.go:74] duration metric: took 180.152859ms to wait for pod list to return data ...
	I0116 23:56:03.133710   59622 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:56:03.331291   59622 default_sa.go:45] found service account: "default"
	I0116 23:56:03.331318   59622 default_sa.go:55] duration metric: took 197.601815ms for default service account to be created ...
	I0116 23:56:03.331327   59622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:56:03.535418   59622 system_pods.go:86] 7 kube-system pods found
	I0116 23:56:03.535445   59622 system_pods.go:89] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.535450   59622 system_pods.go:89] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.535454   59622 system_pods.go:89] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.535459   59622 system_pods.go:89] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.535462   59622 system_pods.go:89] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.535466   59622 system_pods.go:89] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.535470   59622 system_pods.go:89] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.535476   59622 system_pods.go:126] duration metric: took 204.144185ms to wait for k8s-apps to be running ...
	I0116 23:56:03.535483   59622 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:56:03.535528   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:56:03.558457   59622 system_svc.go:56] duration metric: took 22.958568ms WaitForService to wait for kubelet.
	I0116 23:56:03.558483   59622 kubeadm.go:581] duration metric: took 8.866344408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:56:03.558508   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:56:03.731393   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:56:03.731421   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:56:03.731429   59622 node_conditions.go:105] duration metric: took 172.916822ms to run NodePressure ...
	I0116 23:56:03.731440   59622 start.go:228] waiting for startup goroutines ...
	I0116 23:56:03.731446   59622 start.go:233] waiting for cluster config update ...
	I0116 23:56:03.731455   59622 start.go:242] writing updated cluster config ...
	I0116 23:56:03.731701   59622 ssh_runner.go:195] Run: rm -f paused
	I0116 23:56:03.779121   59622 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 23:56:03.780832   59622 out.go:177] 
	W0116 23:56:03.782249   59622 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 23:56:03.783563   59622 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 23:56:03.784839   59622 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-771669" cluster and "default" namespace by default
	I0116 23:56:00.398654   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.895567   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:04.986421   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:06.987967   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.844145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.844338   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.397178   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.895626   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.486597   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:11.987301   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:10.345558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.346663   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.896758   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.397091   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.488021   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.488653   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.844671   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.846046   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.897098   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:17.396519   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.986905   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.488422   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.846198   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.344147   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:19.397728   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.896773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.986213   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:25.986326   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:27.987150   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.845648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.344054   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:28.344553   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:24.396383   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.896341   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.487401   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.986835   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.346441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.847915   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:29.396831   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:31.397001   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:33.896875   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.486456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.488505   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:34.852382   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.347707   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.897340   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:38.397188   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.987512   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.487096   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.845150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:40.397474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.895926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.985826   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.987077   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.344935   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.844558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:45.397742   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:47.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:48.987672   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.488276   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.344755   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.844573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.902616   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:52.397613   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.989294   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:56.486373   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.844691   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:55.844956   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.345033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:54.899462   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:57.396680   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.986702   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.485949   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.486250   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:00.347078   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:02.845105   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:59.397016   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.397815   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.898419   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.486385   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.486685   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.344293   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.345029   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:06.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:08.397358   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.986254   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:11.986807   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.845903   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.345589   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:10.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.896725   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:13.986990   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.487092   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:14.845336   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.845800   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:15.396130   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:17.399737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:18.986833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:20.987345   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.486929   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.344648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.345638   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.896048   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.897272   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:25.987181   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.488006   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.846298   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.345451   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.346186   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:24.398032   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.896171   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.987497   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:33.485899   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.347831   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:32.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:29.398760   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:31.896331   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.486038   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.487296   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.344615   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.844449   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:34.397051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:36.400079   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:38.896897   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.492372   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.987336   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.847519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:42.346252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.396236   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.396714   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.988240   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:46.486455   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:48.487134   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:44.848036   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.345407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:45.397310   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.397378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:50.986902   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.492230   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.845627   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.397826   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.895923   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.897342   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:55.986753   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:57.986861   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:54.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.344864   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.345725   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.897155   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.486888   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.987550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.844347   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.846516   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:01.396565   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:03.397374   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:04.990116   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.487567   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.345481   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.844570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.897023   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:08.396985   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.990087   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.490589   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.844815   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:11.845732   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:10.895979   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.896502   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.986451   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.986611   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.344767   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.844872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:15.398203   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:17.399261   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:18.987191   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.487703   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:23.487926   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.347376   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.845439   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.896972   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:22.397424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:25.987262   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.486174   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.344012   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.347050   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.398243   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.896557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.987243   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.988415   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.844551   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.845899   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.846576   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:29.396646   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:31.397556   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:33.896411   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.486850   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.985735   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.344337   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.344473   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.896685   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.898876   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.986999   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.486890   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.345534   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:41.345897   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:40.396241   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.396546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.987464   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.988853   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:43.846142   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.343994   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.396719   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.896228   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.896671   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:49.486803   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:51.491540   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.845009   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.847872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:52.847933   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.897309   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.396763   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.987492   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:56.486550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:58.486963   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.346425   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.347346   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.397687   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.399191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:00.987456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.486837   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.843983   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.844326   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.895907   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.896151   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.900424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:05.991223   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.486493   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.844751   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.344021   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.344949   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.397063   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.895750   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.987148   59938 pod_ready.go:81] duration metric: took 4m0.007687151s waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:08.987175   59938 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 23:59:08.987182   59938 pod_ready.go:38] duration metric: took 4m1.609147819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
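
The four-minute wait that ends above is the harness repeatedly polling the metrics-server pod's Ready condition until the context deadline expires. Purely as a point of reference, a minimal client-go sketch of that same Ready-condition check (this is not minikube's actual pod_ready.go code; the kubeconfig path is an assumed default and the pod name is taken from the log above) could look like:

	// readiness_check.go - hedged sketch, not the harness's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True, mirroring
	// the check behind the `has status "Ready":"False"` lines above.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same 4m budget as the wait that just timed out.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		ok, err := podReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-xbr22")
		fmt.Println(ok, err)
	}
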
	I0116 23:59:08.987199   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:59:08.987235   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:08.987285   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:09.035133   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:09.035154   59938 cri.go:89] found id: ""
	I0116 23:59:09.035161   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:09.035211   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.039082   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:09.039138   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:09.085096   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:09.085167   59938 cri.go:89] found id: ""
	I0116 23:59:09.085181   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:09.085246   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.090821   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:09.090893   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:09.127517   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.127548   59938 cri.go:89] found id: ""
	I0116 23:59:09.127558   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:09.127620   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.131643   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:09.131759   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:09.168954   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:09.168979   59938 cri.go:89] found id: ""
	I0116 23:59:09.168988   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:09.169049   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.173389   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:09.173454   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:09.212516   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.212543   59938 cri.go:89] found id: ""
	I0116 23:59:09.212549   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:09.212597   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.216401   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:09.216458   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:09.253140   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.253166   59938 cri.go:89] found id: ""
	I0116 23:59:09.253176   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:09.253235   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.257248   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:09.257315   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:09.296077   59938 cri.go:89] found id: ""
	I0116 23:59:09.296108   59938 logs.go:284] 0 containers: []
	W0116 23:59:09.296119   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:09.296126   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:09.296184   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:09.346212   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:09.346234   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:09.346240   59938 cri.go:89] found id: ""
	I0116 23:59:09.346261   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:09.346320   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.350651   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.353960   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:09.353984   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.387875   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:09.387900   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.428147   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:09.428173   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:09.481107   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:09.481135   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:09.536958   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:09.536994   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:09.550512   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:09.550547   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.605837   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:09.605870   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:10.096496   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:10.096548   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:10.134931   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:10.134973   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:10.276791   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:10.276824   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:10.335509   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:10.335544   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:10.395664   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:10.395708   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.431013   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:10.431051   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
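
Each "Gathering logs for …" step above is a `crictl logs --tail 400 <container-id>` (or `journalctl`) invocation that the harness runs over SSH inside the VM via ssh_runner. As an illustration of that pattern only, and assuming local access rather than the harness's SSH transport, the equivalent call from Go would be roughly:

	// crictl_logs.go - illustrative sketch; the test harness executes this remotely.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Container ID taken from the log above; substitute one from `crictl ps -a`.
		id := "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
		cmd := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}
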
	I0116 23:59:12.975358   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:59:12.989628   59938 api_server.go:72] duration metric: took 4m12.851755215s to wait for apiserver process to appear ...
	I0116 23:59:12.989650   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:59:12.989689   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:12.989738   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:13.026039   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.026071   59938 cri.go:89] found id: ""
	I0116 23:59:13.026083   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:13.026138   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.030174   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:13.030236   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:13.067808   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:13.067834   59938 cri.go:89] found id: ""
	I0116 23:59:13.067840   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:13.067888   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.072042   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:13.072118   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:13.111330   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.111351   59938 cri.go:89] found id: ""
	I0116 23:59:13.111359   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:13.111403   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.115095   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:13.115187   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:13.158668   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:13.158691   59938 cri.go:89] found id: ""
	I0116 23:59:13.158699   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:13.158758   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.162836   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:13.162899   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:13.202353   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:13.202372   59938 cri.go:89] found id: ""
	I0116 23:59:13.202379   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:13.202425   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.206475   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:13.206544   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:13.241036   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:13.241069   59938 cri.go:89] found id: ""
	I0116 23:59:13.241080   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:13.241136   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.245245   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:13.245309   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:13.286069   59938 cri.go:89] found id: ""
	I0116 23:59:13.286098   59938 logs.go:284] 0 containers: []
	W0116 23:59:13.286107   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:13.286115   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:13.286178   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:13.324129   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.324148   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.324152   59938 cri.go:89] found id: ""
	I0116 23:59:13.324159   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:13.324201   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.328325   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.332030   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:13.332052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:13.345141   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:13.345181   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.404778   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:13.404809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.441286   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:13.441323   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:13.503668   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:13.503702   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.542599   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:13.542631   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.347184   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:12.844417   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:10.896545   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.397454   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.578579   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:13.578609   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.615906   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:13.615934   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:14.022019   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:14.022058   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:14.139776   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:14.139809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:14.201936   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:14.201970   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:14.240473   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:14.240500   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:14.291008   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:14.291037   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:16.843555   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:59:16.849532   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:59:16.850519   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:59:16.850538   59938 api_server.go:131] duration metric: took 3.860882856s to wait for apiserver health ...
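
The healthz wait above succeeds once a GET against https://192.168.50.183:8443/healthz returns 200 with body "ok". A hedged, illustrative probe of that endpoint is sketched below; it skips TLS verification purely for brevity, whereas the real check trusts the cluster CA and authenticates with the client certificates from the kubeconfig.

	// healthz_probe.go - minimal sketch of the /healthz wait logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify is for this illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.50.183:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}
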
	I0116 23:59:16.850547   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:59:16.850568   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:16.850610   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:16.900417   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:16.900434   59938 cri.go:89] found id: ""
	I0116 23:59:16.900441   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:16.900493   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.905495   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:16.905548   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:16.945387   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:16.945406   59938 cri.go:89] found id: ""
	I0116 23:59:16.945413   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:16.945463   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.949948   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:16.950016   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:16.987183   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:16.987202   59938 cri.go:89] found id: ""
	I0116 23:59:16.987209   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:16.987252   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.992140   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:16.992191   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:17.029253   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.029275   59938 cri.go:89] found id: ""
	I0116 23:59:17.029282   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:17.029336   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.033524   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:17.033609   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:17.068889   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:17.068913   59938 cri.go:89] found id: ""
	I0116 23:59:17.068932   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:17.068986   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.072818   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:17.072885   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:17.111186   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.111207   59938 cri.go:89] found id: ""
	I0116 23:59:17.111216   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:17.111279   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.115133   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:17.115192   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:17.150279   59938 cri.go:89] found id: ""
	I0116 23:59:17.150307   59938 logs.go:284] 0 containers: []
	W0116 23:59:17.150316   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:17.150321   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:17.150401   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:17.192284   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.192321   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.192328   59938 cri.go:89] found id: ""
	I0116 23:59:17.192338   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:17.192394   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.196472   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.200243   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:17.200266   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.240155   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:17.240188   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:17.252553   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:17.252585   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.304688   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:17.304721   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.346444   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:17.346470   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:17.497208   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:17.497241   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:17.561621   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:17.561648   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:17.611648   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:17.611677   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.646407   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:17.646436   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:17.991476   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:17.991528   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:18.053214   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:18.053251   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:18.128011   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:18.128049   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:18.165018   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:18.165052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:15.345715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.849104   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:15.896059   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.890054   60073 pod_ready.go:81] duration metric: took 4m0.00102229s waiting for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:17.890102   60073 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:17.890127   60073 pod_ready.go:38] duration metric: took 4m7.665333761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:17.890162   60073 kubeadm.go:640] restartCluster took 4m29.748178484s
	W0116 23:59:17.890247   60073 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:17.890288   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:20.715055   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:59:20.715096   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.715109   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.715116   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.715123   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.715129   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.715136   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.715146   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.715156   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.715180   59938 system_pods.go:74] duration metric: took 3.864627163s to wait for pod list to return data ...
	I0116 23:59:20.715190   59938 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:59:20.718138   59938 default_sa.go:45] found service account: "default"
	I0116 23:59:20.718165   59938 default_sa.go:55] duration metric: took 2.964863ms for default service account to be created ...
	I0116 23:59:20.718175   59938 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:59:20.724393   59938 system_pods.go:86] 8 kube-system pods found
	I0116 23:59:20.724420   59938 system_pods.go:89] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.724428   59938 system_pods.go:89] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.724435   59938 system_pods.go:89] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.724443   59938 system_pods.go:89] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.724449   59938 system_pods.go:89] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.724457   59938 system_pods.go:89] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.724467   59938 system_pods.go:89] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.724479   59938 system_pods.go:89] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.724490   59938 system_pods.go:126] duration metric: took 6.307831ms to wait for k8s-apps to be running ...
	I0116 23:59:20.724503   59938 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:59:20.724558   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:20.739056   59938 system_svc.go:56] duration metric: took 14.504317ms WaitForService to wait for kubelet.
	I0116 23:59:20.739102   59938 kubeadm.go:581] duration metric: took 4m20.601225794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:59:20.739130   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:59:20.742521   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:59:20.742550   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:59:20.742565   59938 node_conditions.go:105] duration metric: took 3.429513ms to run NodePressure ...
	I0116 23:59:20.742581   59938 start.go:228] waiting for startup goroutines ...
	I0116 23:59:20.742594   59938 start.go:233] waiting for cluster config update ...
	I0116 23:59:20.742607   59938 start.go:242] writing updated cluster config ...
	I0116 23:59:20.742897   59938 ssh_runner.go:195] Run: rm -f paused
	I0116 23:59:20.796748   59938 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 23:59:20.799136   59938 out.go:177] * Done! kubectl is now configured to use "no-preload-085322" cluster and "default" namespace by default
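	The no-preload-085322 lines above show minikube enumerating the kube-system pods and then reading node capacity (node_conditions.go reports 17784752Ki of ephemeral storage and 2 CPUs) before declaring the cluster ready. The following is only a minimal client-go sketch of that capacity read, written as a standalone program against the kubeconfig path that appears later in this log; it is not minikube's actual system_pods.go or node_conditions.go code.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this report's log; adjust for other environments.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17975-6238/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The log above reports these values as 17784752Ki and 2 for this node.
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
	}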
	I0116 23:59:20.345640   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:22.845018   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:24.845103   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:26.846579   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:29.345070   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.346027   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:33.346506   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.203795   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.313480768s)
	I0116 23:59:31.203876   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:31.217359   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:31.228245   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:31.238220   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:31.238268   60073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:31.453638   60073 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 23:59:35.845570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:37.845959   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:42.067699   60073 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:42.067758   60073 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:42.067846   60073 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:42.067963   60073 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:42.068086   60073 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:42.068177   60073 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:42.069920   60073 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:42.070029   60073 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:42.070134   60073 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:42.070239   60073 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:42.070320   60073 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:42.070461   60073 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:42.070543   60073 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:42.070628   60073 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:42.070700   60073 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:42.070790   60073 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:42.070885   60073 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:42.070932   60073 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:42.070998   60073 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:42.071063   60073 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:42.071135   60073 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:42.071215   60073 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:42.071285   60073 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:42.071387   60073 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:42.071470   60073 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:42.072979   60073 out.go:204]   - Booting up control plane ...
	I0116 23:59:42.073092   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:42.073200   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:42.073276   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:42.073388   60073 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:42.073521   60073 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:42.073576   60073 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:42.073797   60073 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:42.073902   60073 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002800 seconds
	I0116 23:59:42.074028   60073 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 23:59:42.074167   60073 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 23:59:42.074262   60073 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 23:59:42.074513   60073 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-837871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 23:59:42.074590   60073 kubeadm.go:322] [bootstrap-token] Using token: ta3wls.bkzq7grnlnkl7idk
	I0116 23:59:42.076261   60073 out.go:204]   - Configuring RBAC rules ...
	I0116 23:59:42.076394   60073 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 23:59:42.076494   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 23:59:42.076672   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 23:59:42.076836   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 23:59:42.077027   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 23:59:42.077141   60073 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 23:59:42.077286   60073 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 23:59:42.077338   60073 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 23:59:42.077401   60073 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 23:59:42.077420   60073 kubeadm.go:322] 
	I0116 23:59:42.077490   60073 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 23:59:42.077501   60073 kubeadm.go:322] 
	I0116 23:59:42.077590   60073 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 23:59:42.077599   60073 kubeadm.go:322] 
	I0116 23:59:42.077631   60073 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 23:59:42.077704   60073 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 23:59:42.077768   60073 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 23:59:42.077777   60073 kubeadm.go:322] 
	I0116 23:59:42.077841   60073 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 23:59:42.077855   60073 kubeadm.go:322] 
	I0116 23:59:42.077910   60073 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 23:59:42.077918   60073 kubeadm.go:322] 
	I0116 23:59:42.077980   60073 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 23:59:42.078071   60073 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 23:59:42.078167   60073 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 23:59:42.078177   60073 kubeadm.go:322] 
	I0116 23:59:42.078274   60073 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 23:59:42.078382   60073 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 23:59:42.078392   60073 kubeadm.go:322] 
	I0116 23:59:42.078488   60073 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078612   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 23:59:42.078642   60073 kubeadm.go:322] 	--control-plane 
	I0116 23:59:42.078651   60073 kubeadm.go:322] 
	I0116 23:59:42.078749   60073 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 23:59:42.078758   60073 kubeadm.go:322] 
	I0116 23:59:42.078854   60073 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078989   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 23:59:42.079007   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:59:42.079017   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:59:42.080763   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:59:39.838671   60269 pod_ready.go:81] duration metric: took 4m0.001157455s waiting for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:39.838703   60269 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:39.838724   60269 pod_ready.go:38] duration metric: took 4m10.089026356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:39.838774   60269 kubeadm.go:640] restartCluster took 4m29.617057242s
	W0116 23:59:39.838852   60269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:39.838881   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:42.082183   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:59:42.116830   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:59:42.163609   60073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:59:42.163699   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.163705   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=embed-certs-837871 minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.221959   60073 ops.go:34] apiserver oom_adj: -16
	I0116 23:59:42.506451   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.007345   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.506584   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.007197   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.507002   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.006480   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.506954   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.006461   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.506833   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.007157   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.506780   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.007146   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.506504   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:49.006489   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.364253   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.525344336s)
	I0116 23:59:53.364334   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:53.379240   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:53.389562   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:53.400331   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:53.400385   60269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:53.462116   60269 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:53.462202   60269 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:53.624890   60269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:53.625015   60269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:53.625132   60269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:53.877364   60269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:49.506939   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.007132   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.506909   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.006499   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.506508   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.006475   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.507008   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.007272   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.506479   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.007240   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.507034   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.651685   60073 kubeadm.go:1088] duration metric: took 12.488048347s to wait for elevateKubeSystemPrivileges.
	I0116 23:59:54.651729   60073 kubeadm.go:406] StartCluster complete in 5m6.561279262s
	I0116 23:59:54.651753   60073 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.651855   60073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:59:54.654608   60073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.654868   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:59:54.654894   60073 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:59:54.654964   60073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-837871"
	I0116 23:59:54.654980   60073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-837871"
	I0116 23:59:54.655005   60073 addons.go:69] Setting metrics-server=true in profile "embed-certs-837871"
	I0116 23:59:54.655018   60073 addons.go:234] Setting addon metrics-server=true in "embed-certs-837871"
	W0116 23:59:54.655027   60073 addons.go:243] addon metrics-server should already be in state true
	I0116 23:59:54.655090   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:59:54.655026   60073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-837871"
	I0116 23:59:54.655160   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.654988   60073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-837871"
	W0116 23:59:54.655234   60073 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:59:54.655271   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.655539   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655568   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655652   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655734   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.672017   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0116 23:59:54.672591   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.673220   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.673241   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.673335   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0116 23:59:54.673863   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0116 23:59:54.673894   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.673865   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674262   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674430   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674447   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.674491   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.674517   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.674764   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.674932   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674943   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.675310   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.675465   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.675601   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.675631   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.679148   60073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-837871"
	W0116 23:59:54.679166   60073 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:59:54.679192   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.679564   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.679582   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.694210   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0116 23:59:54.694711   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.694923   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0116 23:59:54.695308   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.695325   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.695432   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.695724   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.696036   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.696059   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.696124   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.696524   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.697116   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.697142   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.697326   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0116 23:59:54.697741   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.698016   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.700178   60073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:59:54.698504   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.701842   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.701911   60073 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:54.701927   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:59:54.701945   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.704090   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.704258   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.705992   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.706067   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.707873   60073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:59:53.878701   60269 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:53.878801   60269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:53.878881   60269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:53.879376   60269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:53.879833   60269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:53.880391   60269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:53.880900   60269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:53.881422   60269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:53.881941   60269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:53.882468   60269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:53.882982   60269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:53.883410   60269 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:53.883502   60269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:54.118678   60269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:54.334917   60269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:54.487424   60269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:55.124961   60269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:55.125701   60269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:55.128156   60269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:54.706475   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.706576   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.709278   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:59:54.709292   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:59:54.709305   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.709341   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.709501   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.709672   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.709805   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.712515   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713092   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.713180   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713283   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.713426   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.713633   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.713742   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.716354   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0116 23:59:54.716699   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.717118   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.717135   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.717441   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.717677   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.719338   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.719591   60073 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:54.719604   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:59:54.719619   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.722542   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.722963   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.723002   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.723112   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.723259   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.723463   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.723587   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.885431   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 23:59:55.001297   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:59:55.001329   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:59:55.003513   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:55.008428   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:55.068722   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:59:55.068751   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:59:55.129663   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:55.129686   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:59:55.161891   60073 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-837871" context rescaled to 1 replicas
	I0116 23:59:55.161935   60073 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:59:55.164356   60073 out.go:177] * Verifying Kubernetes components...
	I0116 23:59:55.165822   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:55.240612   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:56.696329   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810851137s)
	I0116 23:59:56.696383   60073 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 23:59:56.696338   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.69278648s)
	I0116 23:59:56.696422   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696440   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.696806   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.696868   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.696879   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.696889   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696898   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.697174   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.697191   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.697193   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.729656   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.729685   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.730006   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.730047   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.730051   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.196943   60073 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.031082317s)
	I0116 23:59:57.196991   60073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.197171   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.188708335s)
	I0116 23:59:57.197216   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197232   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197556   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197573   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.197590   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197600   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197905   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.197908   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197976   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.211232   60073 node_ready.go:49] node "embed-certs-837871" has status "Ready":"True"
	I0116 23:59:57.211308   60073 node_ready.go:38] duration metric: took 14.304366ms waiting for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.211330   60073 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:57.230768   60073 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:57.274393   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033730298s)
	I0116 23:59:57.274453   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274471   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.274881   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.274904   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.274915   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274925   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.275196   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.275249   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.275273   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.275284   60073 addons.go:470] Verifying addon metrics-server=true in "embed-certs-837871"
	I0116 23:59:57.277304   60073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 23:59:55.129817   60269 out.go:204]   - Booting up control plane ...
	I0116 23:59:55.129937   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:55.130951   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:55.132943   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:55.149929   60269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:55.151138   60269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:55.151234   60269 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:55.303686   60269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:57.278953   60073 addons.go:505] enable addons completed in 2.62405803s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 23:59:58.738410   60073 pod_ready.go:92] pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.738434   60073 pod_ready.go:81] duration metric: took 1.507588571s waiting for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.738444   60073 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744592   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.744617   60073 pod_ready.go:81] duration metric: took 6.165419ms waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744626   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750130   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.750152   60073 pod_ready.go:81] duration metric: took 5.519057ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750164   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755783   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.755809   60073 pod_ready.go:81] duration metric: took 5.636904ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755821   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801735   60073 pod_ready.go:92] pod "kube-proxy-n2l6s" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.801769   60073 pod_ready.go:81] duration metric: took 45.939564ms waiting for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801784   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:02.807761   60269 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503615 seconds
	I0117 00:00:02.807943   60269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:00:02.828242   60269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:00:03.364977   60269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:00:03.365242   60269 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-967325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:00:03.879636   60269 kubeadm.go:322] [bootstrap-token] Using token: y6fuay.d44apxq5qutu9x05
	I0116 23:59:59.202392   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:59.202420   60073 pod_ready.go:81] duration metric: took 400.626378ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:59.202435   60073 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:01.211490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.710138   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.881170   60269 out.go:204]   - Configuring RBAC rules ...
	I0117 00:00:03.881357   60269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:00:03.888392   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:00:03.896580   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:00:03.900204   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:00:03.907475   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:00:03.911613   60269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:00:03.931171   60269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:00:04.171033   60269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:00:04.300769   60269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:00:04.300793   60269 kubeadm.go:322] 
	I0117 00:00:04.300911   60269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:00:04.300944   60269 kubeadm.go:322] 
	I0117 00:00:04.301038   60269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:00:04.301049   60269 kubeadm.go:322] 
	I0117 00:00:04.301089   60269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:00:04.301161   60269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:00:04.301223   60269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:00:04.301234   60269 kubeadm.go:322] 
	I0117 00:00:04.301302   60269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:00:04.301312   60269 kubeadm.go:322] 
	I0117 00:00:04.301373   60269 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:00:04.301387   60269 kubeadm.go:322] 
	I0117 00:00:04.301445   60269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:00:04.301545   60269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:00:04.301645   60269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:00:04.301656   60269 kubeadm.go:322] 
	I0117 00:00:04.301758   60269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:00:04.301861   60269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:00:04.301871   60269 kubeadm.go:322] 
	I0117 00:00:04.301972   60269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302108   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:00:04.302156   60269 kubeadm.go:322] 	--control-plane 
	I0117 00:00:04.302167   60269 kubeadm.go:322] 
	I0117 00:00:04.302261   60269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:00:04.302272   60269 kubeadm.go:322] 
	I0117 00:00:04.302381   60269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302499   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:00:04.303423   60269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:00:04.303460   60269 cni.go:84] Creating CNI manager for ""
	I0117 00:00:04.303481   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:00:04.305311   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:00:04.307124   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:00:04.322172   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
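The bridge CNI step above writes the generated conflist to /etc/cni/net.d/1-k8s.conflist inside the VM. To see exactly what was written during a run like this one, the file can be read back over SSH; a minimal sketch, assuming the profile name from this log:

	minikube ssh -p default-k8s-diff-port-967325 -- sudo cat /etc/cni/net.d/1-k8s.conflist
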
	I0117 00:00:04.389195   60269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:00:04.389280   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.389289   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=default-k8s-diff-port-967325 minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.714781   60269 ops.go:34] apiserver oom_adj: -16
	I0117 00:00:04.714929   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.215335   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.715241   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.215729   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.715270   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.215562   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.716006   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.215883   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.715530   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.710945   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:08.210490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:09.215561   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:09.715330   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215559   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.715284   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.215535   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.715573   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.215144   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.715603   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.715595   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:12.709378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:14.215373   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:14.715933   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.715488   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.215344   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.714958   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.874728   60269 kubeadm.go:1088] duration metric: took 12.485508304s to wait for elevateKubeSystemPrivileges.
	I0117 00:00:16.874771   60269 kubeadm.go:406] StartCluster complete in 5m6.711968782s
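The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: minikube polls roughly every 500ms until the "default" ServiceAccount exists (about 12.49s in this run). A rough shell equivalent of that wait, using the in-VM kubeconfig path shown in the log:

	# poll until the "default" ServiceAccount exists, roughly every 500ms
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
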
	I0117 00:00:16.874796   60269 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.874888   60269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:00:16.877055   60269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.877357   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0117 00:00:16.877379   60269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0117 00:00:16.877462   60269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877481   60269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877496   60269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877517   60269 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877523   60269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-967325"
	W0117 00:00:16.877526   60269 addons.go:243] addon metrics-server should already be in state true
	I0117 00:00:16.877487   60269 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877580   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0117 00:00:16.877586   60269 addons.go:243] addon storage-provisioner should already be in state true
	I0117 00:00:16.877598   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877641   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.877996   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.878023   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878044   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878110   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.894446   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0117 00:00:16.894710   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0117 00:00:16.894884   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895198   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895375   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895395   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895731   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895757   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895804   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896075   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896401   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.896436   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.896491   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0117 00:00:16.896763   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.897458   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.898007   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.898028   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.898517   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.899079   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.899106   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.900589   60269 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-967325"
	W0117 00:00:16.900606   60269 addons.go:243] addon default-storageclass should already be in state true
	I0117 00:00:16.900632   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.900945   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.900974   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.917329   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0117 00:00:16.918223   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0117 00:00:16.918283   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918593   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918787   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.918806   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919109   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.919135   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919173   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919426   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.919500   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.921674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.923470   60269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0117 00:00:16.922093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.924865   60269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:16.924882   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0117 00:00:16.924900   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.926158   60269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0117 00:00:16.927440   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0117 00:00:16.927461   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0117 00:00:16.927490   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.928105   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.928694   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.929107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.929289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.929432   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.930149   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0117 00:00:16.930552   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.931255   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.931275   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.931335   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931584   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.931606   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931762   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.931908   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.932042   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.932086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.932178   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.933382   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.933419   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.949543   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0117 00:00:16.950092   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.950585   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.950611   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.950912   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.951212   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.952912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.953207   60269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:16.953221   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0117 00:00:16.953242   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.955778   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956104   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.956144   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956381   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.956659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.956808   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.956958   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:17.129430   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0117 00:00:17.167358   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:17.198527   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0117 00:00:17.198553   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0117 00:00:17.313705   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0117 00:00:17.313743   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0117 00:00:17.318720   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:17.387945   60269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-967325" context rescaled to 1 replicas
	I0117 00:00:17.387984   60269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:00:17.391319   60269 out.go:177] * Verifying Kubernetes components...
	I0117 00:00:17.392893   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:00:17.493520   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:17.493544   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0117 00:00:17.613989   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:14.710779   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:17.209946   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:18.852085   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.722614342s)
	I0117 00:00:18.852124   60269 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
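The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host address (192.168.61.1 in this run). Whether the record landed can be checked directly against the ConfigMap; a sketch, assuming kubectl is pointed at this cluster's kubeconfig:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A 2 'hosts {'
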
	I0117 00:00:19.595960   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.277198121s)
	I0117 00:00:19.595983   60269 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.203057581s)
	I0117 00:00:19.596019   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596022   60269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.596033   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596131   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.428744793s)
	I0117 00:00:19.596164   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596175   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596418   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596437   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596448   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596458   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596544   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596572   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596585   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596603   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596675   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596683   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596697   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.598431   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.598485   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.598507   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.614041   60269 node_ready.go:49] node "default-k8s-diff-port-967325" has status "Ready":"True"
	I0117 00:00:19.614070   60269 node_ready.go:38] duration metric: took 18.033715ms waiting for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.614083   60269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:00:19.631026   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.631065   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.631393   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.631412   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.631430   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.643995   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.685268   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.071240033s)
	I0117 00:00:19.685313   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685685   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685706   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685722   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685725   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.685733   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685949   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685973   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685984   60269 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:19.688162   60269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0117 00:00:19.690707   60269 addons.go:505] enable addons completed in 2.813327403s: enabled=[storage-provisioner default-storageclass metrics-server]
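From here on, the long run of pod_ready polls is minikube waiting for the metrics-server pod to report Ready. It never does in this test, because the addon image is overridden to fake.domain/registry.k8s.io/echoserver:1.4 (see above), which cannot be pulled. An equivalent manual check, assuming the addon's usual k8s-app=metrics-server label:

	kubectl -n kube-system get pods -l k8s-app=metrics-server
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=60s
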
	I0117 00:00:20.653786   60269 pod_ready.go:92] pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.653817   60269 pod_ready.go:81] duration metric: took 1.009789354s waiting for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.653827   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.657327   60269 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657355   60269 pod_ready.go:81] duration metric: took 3.520465ms waiting for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	E0117 00:00:20.657367   60269 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657375   60269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664327   60269 pod_ready.go:92] pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.664345   60269 pod_ready.go:81] duration metric: took 6.963883ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664354   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669229   60269 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.669247   60269 pod_ready.go:81] duration metric: took 4.887581ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669255   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675553   60269 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.675577   60269 pod_ready.go:81] duration metric: took 6.316801ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675585   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800600   60269 pod_ready.go:92] pod "kube-proxy-2z6bl" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:21.800632   60269 pod_ready.go:81] duration metric: took 1.125039774s waiting for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800646   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200536   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:22.200559   60269 pod_ready.go:81] duration metric: took 399.905665ms waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200569   60269 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.212369   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:21.709474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:23.710530   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:24.210445   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:26.709024   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:28.709454   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:25.710634   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:27.710692   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:30.709571   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.710848   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:29.710867   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.209611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:35.208419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:37.708871   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:34.209847   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:36.210863   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:38.211047   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.209274   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711560   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.212061   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711598   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.209016   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211322   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.211051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.709459   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.209458   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.711889   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.210405   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.710123   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:57.208591   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.210670   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:56.711102   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:58.711595   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:59.708515   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.710699   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.210587   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:03.210938   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:04.207715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:06.709563   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:05.211825   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:07.709958   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:09.208156   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:11.208879   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:13.708545   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:10.211100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:12.710100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:16.209033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:18.209754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:14.710821   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:17.212258   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:20.708444   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.712038   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:19.711436   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.210580   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.714772   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:27.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.213488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:26.711404   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.710945   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:32.208179   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.211008   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:31.212442   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:33.711966   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:34.208936   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.209612   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.708413   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.211118   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.214093   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:41.208750   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:43.208812   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:40.710199   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:42.710497   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.708094   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:48.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.210899   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:47.214352   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:50.708669   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:52.709880   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:49.709767   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:51.710715   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:53.714522   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:55.209030   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:57.709205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:56.212226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:58.715976   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:00.209358   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:02.710521   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:01.210842   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:03.710418   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.208742   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:07.210121   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.711354   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:08.211933   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:09.210830   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:11.708402   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:13.710205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:10.212433   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:12.715928   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:16.207633   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:18.208824   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:15.214546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:17.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.209380   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.708970   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.212349   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.711167   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.208762   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.708487   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.212601   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:30.209319   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.708822   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:29.711046   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:35.207798   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.217291   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:34.710869   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.210140   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.707745   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711335   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.708871   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711327   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.207582   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.207988   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:48.709297   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.211602   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.714689   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.208519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.208808   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:49.212952   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.214415   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.710355   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.209145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:57.210556   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.716301   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:58.211226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:59.709541   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.208573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:00.709819   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.712699   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.208754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:06.708448   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:08.709286   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.713780   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:07.213872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:10.709570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:13.208062   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:09.714259   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:12.211448   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:15.209488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:17.709522   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:14.710693   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:16.711192   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:20.207874   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:22.211189   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:19.210191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:21.210773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:23.213975   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:24.708835   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:26.708889   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:25.710691   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:27.711139   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:29.209704   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:31.209811   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:33.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:30.210569   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:32.211539   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:35.708998   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:38.208295   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:34.711729   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:37.210492   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:40.707726   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:42.709246   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:39.211926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:41.711599   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:43.711794   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:44.710010   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:47.208407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:46.211285   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:48.212279   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:49.208869   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:51.210676   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:53.708315   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:50.212776   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:52.710665   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:55.709867   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:58.210415   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:54.711312   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:57.210611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:00.708385   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:03.208916   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210900   60073 pod_ready.go:81] duration metric: took 4m0.008455197s waiting for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	E0117 00:03:59.210913   60073 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:03:59.210923   60073 pod_ready.go:38] duration metric: took 4m1.999568751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:03:59.210941   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:03:59.210977   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:03:59.211045   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:03:59.268921   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.268947   60073 cri.go:89] found id: ""
	I0117 00:03:59.268956   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:03:59.269005   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.273505   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:03:59.273575   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:03:59.316812   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:03:59.316838   60073 cri.go:89] found id: ""
	I0117 00:03:59.316847   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:03:59.316902   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.321703   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:03:59.321778   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:03:59.365900   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:03:59.365920   60073 cri.go:89] found id: ""
	I0117 00:03:59.365927   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:03:59.365979   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.371077   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:03:59.371148   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:03:59.410379   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:03:59.410405   60073 cri.go:89] found id: ""
	I0117 00:03:59.410415   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:03:59.410475   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.414679   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:03:59.414752   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:03:59.452102   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.452137   60073 cri.go:89] found id: ""
	I0117 00:03:59.452146   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:03:59.452208   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.456735   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:03:59.456805   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:03:59.497070   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:03:59.497097   60073 cri.go:89] found id: ""
	I0117 00:03:59.497105   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:03:59.497172   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.501388   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:03:59.501464   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:03:59.542895   60073 cri.go:89] found id: ""
	I0117 00:03:59.542921   60073 logs.go:284] 0 containers: []
	W0117 00:03:59.542929   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:03:59.542935   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:03:59.542986   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:03:59.579487   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:03:59.579510   60073 cri.go:89] found id: ""
	I0117 00:03:59.579529   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:03:59.579583   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.583247   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:03:59.583272   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:03:59.682098   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:03:59.682136   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:03:59.811527   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:03:59.811555   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.858592   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:03:59.858623   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.896044   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:03:59.896077   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:00.305516   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:00.305553   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:00.346703   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:00.346734   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:00.360638   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:00.360671   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:00.405575   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:00.405607   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:00.443294   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:00.443325   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:00.489541   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:00.489572   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:00.547805   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:00.547835   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.085588   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:03.102500   60073 api_server.go:72] duration metric: took 4m7.940532649s to wait for apiserver process to appear ...
	I0117 00:04:03.102525   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:03.102560   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:03.102604   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:03.154743   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.154765   60073 cri.go:89] found id: ""
	I0117 00:04:03.154775   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:03.154837   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.158905   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:03.158964   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:03.199001   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.199026   60073 cri.go:89] found id: ""
	I0117 00:04:03.199035   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:03.199090   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.203757   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:03.203821   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:03.243821   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:03.243853   60073 cri.go:89] found id: ""
	I0117 00:04:03.243862   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:03.243926   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.248835   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:03.248938   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:03.287785   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.287807   60073 cri.go:89] found id: ""
	I0117 00:04:03.287817   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:03.287879   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.291737   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:03.291795   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:03.329647   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.329671   60073 cri.go:89] found id: ""
	I0117 00:04:03.329680   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:03.329740   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.337418   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:03.337513   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:03.375391   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:03.375412   60073 cri.go:89] found id: ""
	I0117 00:04:03.375419   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:03.375468   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.379630   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:03.379697   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:03.418311   60073 cri.go:89] found id: ""
	I0117 00:04:03.418353   60073 logs.go:284] 0 containers: []
	W0117 00:04:03.418366   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:03.418374   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:03.418425   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:03.464391   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.464414   60073 cri.go:89] found id: ""
	I0117 00:04:03.464421   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:03.464465   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.469427   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:03.469463   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:03.568016   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:03.568061   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:03.581553   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:03.581578   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.628971   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:03.629007   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.679732   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:03.679768   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.728836   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:03.728875   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.771849   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:03.771879   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:03.902777   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:03.902816   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.952219   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:03.952255   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:04.003190   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:04.003247   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:05.708428   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:07.708492   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:04.067058   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:04.067090   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:04.446812   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:04.446869   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:07.005449   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0117 00:04:07.011401   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0117 00:04:07.012696   60073 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:07.012723   60073 api_server.go:131] duration metric: took 3.910192448s to wait for apiserver health ...
	I0117 00:04:07.012732   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:07.012758   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:07.012804   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:07.052667   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:07.052699   60073 cri.go:89] found id: ""
	I0117 00:04:07.052708   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:07.052769   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.057415   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:07.057482   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:07.096347   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.096374   60073 cri.go:89] found id: ""
	I0117 00:04:07.096383   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:07.096445   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.100499   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:07.100598   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:07.145539   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:07.145561   60073 cri.go:89] found id: ""
	I0117 00:04:07.145567   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:07.145625   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.149880   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:07.149936   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:07.188723   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:07.188751   60073 cri.go:89] found id: ""
	I0117 00:04:07.188760   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:07.188822   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.193191   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:07.193259   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:07.236787   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.236811   60073 cri.go:89] found id: ""
	I0117 00:04:07.236820   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:07.236876   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.241167   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:07.241219   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:07.279432   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.279453   60073 cri.go:89] found id: ""
	I0117 00:04:07.279462   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:07.279527   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.283548   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:07.283618   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:07.319879   60073 cri.go:89] found id: ""
	I0117 00:04:07.319912   60073 logs.go:284] 0 containers: []
	W0117 00:04:07.319922   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:07.319930   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:07.319992   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:07.356138   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.356162   60073 cri.go:89] found id: ""
	I0117 00:04:07.356170   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:07.356219   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.360310   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:07.360339   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:07.457151   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:07.457197   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.501163   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:07.501207   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.544248   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:07.544279   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.593284   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:07.593321   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.635978   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:07.636016   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:07.950451   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:07.950489   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:08.003046   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:08.003089   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:08.017299   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:08.017341   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:08.152348   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:08.152401   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:08.213047   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:08.213084   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:08.249860   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:08.249897   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:10.813629   60073 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:10.813656   60073 system_pods.go:61] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.813670   60073 system_pods.go:61] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.813676   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.813681   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.813685   60073 system_pods.go:61] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.813689   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.813695   60073 system_pods.go:61] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.813699   60073 system_pods.go:61] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.813707   60073 system_pods.go:74] duration metric: took 3.800969531s to wait for pod list to return data ...
	I0117 00:04:10.813714   60073 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:10.816640   60073 default_sa.go:45] found service account: "default"
	I0117 00:04:10.816662   60073 default_sa.go:55] duration metric: took 2.941561ms for default service account to be created ...
	I0117 00:04:10.816669   60073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:10.823246   60073 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:10.823270   60073 system_pods.go:89] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.823274   60073 system_pods.go:89] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.823279   60073 system_pods.go:89] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.823283   60073 system_pods.go:89] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.823287   60073 system_pods.go:89] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.823291   60073 system_pods.go:89] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.823297   60073 system_pods.go:89] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.823302   60073 system_pods.go:89] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.823309   60073 system_pods.go:126] duration metric: took 6.635452ms to wait for k8s-apps to be running ...
	I0117 00:04:10.823316   60073 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:10.823358   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:10.840725   60073 system_svc.go:56] duration metric: took 17.401272ms WaitForService to wait for kubelet.
	I0117 00:04:10.840756   60073 kubeadm.go:581] duration metric: took 4m15.678792469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:10.840782   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:10.843904   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:10.843926   60073 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:10.843938   60073 node_conditions.go:105] duration metric: took 3.150197ms to run NodePressure ...
	I0117 00:04:10.843949   60073 start.go:228] waiting for startup goroutines ...
	I0117 00:04:10.843954   60073 start.go:233] waiting for cluster config update ...
	I0117 00:04:10.843963   60073 start.go:242] writing updated cluster config ...
	I0117 00:04:10.844214   60073 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:10.894554   60073 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:10.896971   60073 out.go:177] * Done! kubectl is now configured to use "embed-certs-837871" cluster and "default" namespace by default
	I0117 00:04:10.209252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:12.707441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:14.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:17.208289   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:19.708419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:21.708960   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:22.208465   60269 pod_ready.go:81] duration metric: took 4m0.007885269s waiting for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	E0117 00:04:22.208486   60269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:04:22.208494   60269 pod_ready.go:38] duration metric: took 4m2.594399816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:04:22.208508   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:04:22.208558   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:22.208608   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:22.258977   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.259005   60269 cri.go:89] found id: ""
	I0117 00:04:22.259013   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:22.259116   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.264067   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:22.264126   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:22.302361   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:22.302396   60269 cri.go:89] found id: ""
	I0117 00:04:22.302407   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:22.302471   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.306898   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:22.306956   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:22.347083   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.347110   60269 cri.go:89] found id: ""
	I0117 00:04:22.347119   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:22.347177   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.352368   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:22.352441   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:22.392093   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:22.392121   60269 cri.go:89] found id: ""
	I0117 00:04:22.392131   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:22.392264   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.397726   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:22.397791   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:22.434242   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:22.434265   60269 cri.go:89] found id: ""
	I0117 00:04:22.434275   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:22.434342   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.438904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:22.438969   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:22.474797   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.474818   60269 cri.go:89] found id: ""
	I0117 00:04:22.474828   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:22.474874   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.478956   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:22.479020   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:22.517049   60269 cri.go:89] found id: ""
	I0117 00:04:22.517078   60269 logs.go:284] 0 containers: []
	W0117 00:04:22.517089   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:22.517096   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:22.517160   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:22.566393   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:22.566419   60269 cri.go:89] found id: ""
	I0117 00:04:22.566428   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:22.566486   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.572179   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:22.572206   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.624440   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:22.624471   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.666603   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:22.666629   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.734797   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:22.734829   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:22.827906   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:22.827941   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:22.842239   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:22.842269   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:22.990196   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:22.990226   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:23.048894   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:23.048933   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:23.093309   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:23.093340   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:23.135374   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:23.135400   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:23.172339   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:23.172366   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:23.567228   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:23.567266   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:26.111237   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:26.127331   60269 api_server.go:72] duration metric: took 4m8.739316517s to wait for apiserver process to appear ...
	I0117 00:04:26.127358   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:26.127403   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:26.127465   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:26.164726   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:26.164752   60269 cri.go:89] found id: ""
	I0117 00:04:26.164763   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:26.164824   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.168448   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:26.168500   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:26.205643   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:26.205673   60269 cri.go:89] found id: ""
	I0117 00:04:26.205682   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:26.205742   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.209923   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:26.209982   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:26.247432   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:26.247456   60269 cri.go:89] found id: ""
	I0117 00:04:26.247463   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:26.247514   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.251904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:26.252009   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:26.292943   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.292971   60269 cri.go:89] found id: ""
	I0117 00:04:26.292980   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:26.293038   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.298224   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:26.298307   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:26.338299   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:26.338322   60269 cri.go:89] found id: ""
	I0117 00:04:26.338331   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:26.338398   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.342452   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:26.342520   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:26.384665   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.384693   60269 cri.go:89] found id: ""
	I0117 00:04:26.384702   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:26.384761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.389556   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:26.389629   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:26.427717   60269 cri.go:89] found id: ""
	I0117 00:04:26.427748   60269 logs.go:284] 0 containers: []
	W0117 00:04:26.427758   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:26.427766   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:26.427825   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:26.467435   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.467463   60269 cri.go:89] found id: ""
	I0117 00:04:26.467471   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:26.467529   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.471617   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:26.471641   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.514185   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:26.514216   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.569408   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:26.569440   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.610011   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:26.610040   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:26.976249   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:26.976286   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:27.019812   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:27.019855   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:27.064258   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:27.064285   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:27.104147   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:27.104181   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:27.157665   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:27.157695   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:27.255786   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:27.255824   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:27.269460   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:27.269497   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:27.420255   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:27.420288   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.008636   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0117 00:04:30.014467   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0117 00:04:30.015693   60269 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:30.015716   60269 api_server.go:131] duration metric: took 3.888351113s to wait for apiserver health ...
	I0117 00:04:30.015724   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:30.015745   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:30.015789   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:30.055587   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.055608   60269 cri.go:89] found id: ""
	I0117 00:04:30.055626   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:30.055677   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.060043   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:30.060108   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:30.102912   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:30.102938   60269 cri.go:89] found id: ""
	I0117 00:04:30.102946   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:30.102995   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.107429   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:30.107490   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:30.149238   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.149259   60269 cri.go:89] found id: ""
	I0117 00:04:30.149266   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:30.149318   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.154207   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:30.154276   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:30.195972   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.195998   60269 cri.go:89] found id: ""
	I0117 00:04:30.196008   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:30.196067   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.200515   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:30.200593   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:30.242656   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.242686   60269 cri.go:89] found id: ""
	I0117 00:04:30.242696   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:30.242761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.247430   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:30.247488   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:30.285008   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.285036   60269 cri.go:89] found id: ""
	I0117 00:04:30.285045   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:30.285123   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.292254   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:30.292325   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:30.329856   60269 cri.go:89] found id: ""
	I0117 00:04:30.329884   60269 logs.go:284] 0 containers: []
	W0117 00:04:30.329895   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:30.329902   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:30.329962   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:30.370003   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.370026   60269 cri.go:89] found id: ""
	I0117 00:04:30.370033   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:30.370081   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.374869   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:30.374896   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:30.388524   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:30.388564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:30.520901   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:30.520935   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.568977   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:30.569016   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.604580   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:30.604620   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.642634   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:30.642668   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.692005   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:30.692048   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:30.745471   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:30.745532   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:30.842886   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:30.842926   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.891850   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:30.891882   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.929266   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:30.929295   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:31.236511   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:31.236564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:33.783706   60269 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:33.783732   60269 system_pods.go:61] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.783737   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.783742   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.783746   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.783750   60269 system_pods.go:61] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.783754   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.783760   60269 system_pods.go:61] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.783764   60269 system_pods.go:61] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.783772   60269 system_pods.go:74] duration metric: took 3.768043559s to wait for pod list to return data ...
	I0117 00:04:33.783780   60269 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:33.786490   60269 default_sa.go:45] found service account: "default"
	I0117 00:04:33.786515   60269 default_sa.go:55] duration metric: took 2.725972ms for default service account to be created ...
	I0117 00:04:33.786525   60269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:33.793345   60269 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:33.793372   60269 system_pods.go:89] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.793377   60269 system_pods.go:89] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.793382   60269 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.793388   60269 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.793392   60269 system_pods.go:89] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.793396   60269 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.793404   60269 system_pods.go:89] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.793410   60269 system_pods.go:89] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.793417   60269 system_pods.go:126] duration metric: took 6.886472ms to wait for k8s-apps to be running ...
	I0117 00:04:33.793427   60269 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:33.793470   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:33.809147   60269 system_svc.go:56] duration metric: took 15.709692ms WaitForService to wait for kubelet.
	I0117 00:04:33.809197   60269 kubeadm.go:581] duration metric: took 4m16.421187944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:33.809225   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:33.813251   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:33.813289   60269 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:33.813315   60269 node_conditions.go:105] duration metric: took 4.084961ms to run NodePressure ...
	I0117 00:04:33.813339   60269 start.go:228] waiting for startup goroutines ...
	I0117 00:04:33.813349   60269 start.go:233] waiting for cluster config update ...
	I0117 00:04:33.813362   60269 start.go:242] writing updated cluster config ...
	I0117 00:04:33.813716   60269 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:33.866136   60269 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:33.868353   60269 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-967325" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:54:33 UTC, ends at Wed 2024-01-17 00:13:12 UTC. --
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.530163246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=50104277-f994-4194-99d7-e49feff84da9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.530395519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=50104277-f994-4194-99d7-e49feff84da9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.572378812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=456d58c0-61c2-411b-b450-7cb16a5a0321 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.572479966Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=456d58c0-61c2-411b-b450-7cb16a5a0321 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.573005484Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=b83c9143-f17d-4660-aa46-971df14e0282 name=/runtime.v1.RuntimeService/Status
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.573172431Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b83c9143-f17d-4660-aa46-971df14e0282 name=/runtime.v1.RuntimeService/Status
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.573886688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ddf19247-c226-4795-ab5a-144d896d421f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.574389089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450392574368593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ddf19247-c226-4795-ab5a-144d896d421f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.575268525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=39e995fc-54be-49be-876c-0352f557ea8c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.575334929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=39e995fc-54be-49be-876c-0352f557ea8c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.575534777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=39e995fc-54be-49be-876c-0352f557ea8c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.615057285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7aea7f2d-8e37-4b0f-a1be-7f0da7e3894f name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.615210543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7aea7f2d-8e37-4b0f-a1be-7f0da7e3894f name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.616332708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab6c9f2f-c38d-42b9-9332-226d7968fe39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.616758418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450392616744522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ab6c9f2f-c38d-42b9-9332-226d7968fe39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.617337713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=254da49b-b736-4b9e-b995-f6b4b3cd92fa name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.617382738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=254da49b-b736-4b9e-b995-f6b4b3cd92fa name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.617566218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=254da49b-b736-4b9e-b995-f6b4b3cd92fa name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.657702879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a6da109e-e1e0-409c-aef1-177aa3ed6353 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.657824680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a6da109e-e1e0-409c-aef1-177aa3ed6353 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.660700745Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9607f3ad-f214-496c-9b6e-10bcd9a622a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.661550268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450392661528728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9607f3ad-f214-496c-9b6e-10bcd9a622a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.662574739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=531c4331-2f8f-4a9a-b18f-64afa3b3bba3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.662661515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=531c4331-2f8f-4a9a-b18f-64afa3b3bba3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:12 embed-certs-837871 crio[720]: time="2024-01-17 00:13:12.662891683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=531c4331-2f8f-4a9a-b18f-64afa3b3bba3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	304b75257b98a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   b4b8bdb35468a       storage-provisioner
	85a871eaadf52       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   71faeee9f3dba       kube-proxy-n2l6s
	fbf799dc2641e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   fafe333a6c959       coredns-5dd5756b68-52xk7
	724ffd940ff03       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   0dc299c074c74       kube-scheduler-embed-certs-837871
	c4895b3e5cab3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   31ff8832d6a32       etcd-embed-certs-837871
	caa2304d7d208       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   f0518e04f9bf1       kube-controller-manager-embed-certs-837871
	d76dfa44d72e3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   ff56eb5c74690       kube-apiserver-embed-certs-837871
	
	
	==> coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35152 - 63620 "HINFO IN 2176552816251847159.3970859914954375329. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009871044s
	
	
	==> describe nodes <==
	Name:               embed-certs-837871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-837871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=embed-certs-837871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:59:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-837871
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jan 2024 00:13:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:10:14 +0000   Tue, 16 Jan 2024 23:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:10:14 +0000   Tue, 16 Jan 2024 23:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:10:14 +0000   Tue, 16 Jan 2024 23:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:10:14 +0000   Tue, 16 Jan 2024 23:59:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    embed-certs-837871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bfd53b8953e40fd928bb56312c38f54
	  System UUID:                3bfd53b8-953e-40fd-928b-b56312c38f54
	  Boot ID:                    4f31bcd8-c63c-45df-a685-5ed341fe0ce4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-52xk7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-837871                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-837871             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-837871    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-n2l6s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-837871             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-6rsbl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node embed-certs-837871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node embed-certs-837871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node embed-certs-837871 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node embed-certs-837871 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node embed-certs-837871 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node embed-certs-837871 event: Registered Node embed-certs-837871 in Controller
	
	
	==> dmesg <==
	[Jan16 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063248] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.353809] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.958677] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133426] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.417545] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.377031] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.106055] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.130034] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.116010] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.203868] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[ +16.825392] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[Jan16 23:55] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 23:59] systemd-fstab-generator[3481]: Ignoring "noauto" for root device
	[  +9.274995] systemd-fstab-generator[3839]: Ignoring "noauto" for root device
	[ +13.201502] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] <==
	{"level":"info","ts":"2024-01-16T23:59:36.248711Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9e3e2863ac888927","initial-advertise-peer-urls":["https://192.168.39.226:2380"],"listen-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.226:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T23:59:36.248909Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T23:59:36.248265Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2024-01-16T23:59:36.249194Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2024-01-16T23:59:37.008487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T23:59:37.008596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T23:59:37.008644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 received MsgPreVoteResp from 9e3e2863ac888927 at term 1"}
	{"level":"info","ts":"2024-01-16T23:59:37.008678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:37.008702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 received MsgVoteResp from 9e3e2863ac888927 at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:37.008729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:37.008755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9e3e2863ac888927 elected leader 9e3e2863ac888927 at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:37.01035Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9e3e2863ac888927","local-member-attributes":"{Name:embed-certs-837871 ClientURLs:[https://192.168.39.226:2379]}","request-path":"/0/members/9e3e2863ac888927/attributes","cluster-id":"5e6abf1d35eec4c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T23:59:37.010572Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:59:37.010717Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:59:37.0124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T23:59:37.012553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.226:2379"}
	{"level":"info","ts":"2024-01-16T23:59:37.012703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T23:59:37.012755Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T23:59:37.012744Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:37.029505Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5e6abf1d35eec4c5","local-member-id":"9e3e2863ac888927","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:37.02967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:37.032448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-17T00:09:37.047024Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":679}
	{"level":"info","ts":"2024-01-17T00:09:37.050244Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":679,"took":"2.531983ms","hash":582418373}
	{"level":"info","ts":"2024-01-17T00:09:37.050348Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":582418373,"revision":679,"compact-revision":-1}
	
	
	==> kernel <==
	 00:13:13 up 18 min,  0 users,  load average: 0.02, 0.16, 0.22
	Linux embed-certs-837871 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] <==
	I0117 00:09:38.457601       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:09:39.457720       1 handler_proxy.go:93] no RequestInfo found in the context
	W0117 00:09:39.457779       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:09:39.458028       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:09:39.458063       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0117 00:09:39.457884       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:09:39.459451       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:10:38.356674       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:10:39.459209       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:10:39.459531       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:10:39.459636       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:10:39.459773       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:10:39.459817       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:10:39.460770       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:11:38.356916       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0117 00:12:38.356605       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:12:39.460774       1 handler_proxy.go:93] no RequestInfo found in the context
	W0117 00:12:39.460897       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:12:39.460951       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:12:39.460984       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0117 00:12:39.461203       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:12:39.462171       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] <==
	I0117 00:07:24.245714       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:07:53.755496       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:07:54.255368       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:08:23.765706       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:08:24.265039       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:08:53.772740       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:08:54.274789       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:09:23.780698       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:09:24.284945       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:09:53.787174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:09:54.295445       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:23.793046       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:24.304227       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:10:42.182910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="267.297µs"
	E0117 00:10:53.799772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:54.319617       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:10:55.173174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="130.442µs"
	E0117 00:11:23.806083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:24.327094       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:11:53.813050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:54.337298       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:23.819014       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:24.346563       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:53.825806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:54.362887       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] <==
	I0116 23:59:57.991179       1 server_others.go:69] "Using iptables proxy"
	I0116 23:59:58.114054       1 node.go:141] Successfully retrieved node IP: 192.168.39.226
	I0116 23:59:58.356357       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 23:59:58.375293       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 23:59:58.410754       1 server_others.go:152] "Using iptables Proxier"
	I0116 23:59:58.412039       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 23:59:58.412497       1 server.go:846] "Version info" version="v1.28.4"
	I0116 23:59:58.412540       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 23:59:58.415063       1 config.go:315] "Starting node config controller"
	I0116 23:59:58.415252       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 23:59:58.416222       1 config.go:97] "Starting endpoint slice config controller"
	I0116 23:59:58.416338       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 23:59:58.416536       1 config.go:188] "Starting service config controller"
	I0116 23:59:58.416572       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 23:59:58.516051       1 shared_informer.go:318] Caches are synced for node config
	I0116 23:59:58.517252       1 shared_informer.go:318] Caches are synced for service config
	I0116 23:59:58.517271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] <==
	W0116 23:59:38.513758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:59:38.513771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 23:59:38.517285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:59:38.517326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 23:59:38.517398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 23:59:38.517411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 23:59:39.327700       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 23:59:39.327815       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:59:39.494077       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 23:59:39.494204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 23:59:39.516347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:59:39.516371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 23:59:39.574954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:59:39.575065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 23:59:39.584388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:59:39.584488       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 23:59:39.680231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 23:59:39.680346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 23:59:39.704526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 23:59:39.704647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 23:59:39.730575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:59:39.730710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 23:59:39.744943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 23:59:39.745062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0116 23:59:41.282961       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:54:33 UTC, ends at Wed 2024-01-17 00:13:13 UTC. --
	Jan 17 00:10:28 embed-certs-837871 kubelet[3846]: E0117 00:10:28.165792    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:10:42 embed-certs-837871 kubelet[3846]: E0117 00:10:42.152401    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:10:42 embed-certs-837871 kubelet[3846]: E0117 00:10:42.232841    3846 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:10:42 embed-certs-837871 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:10:42 embed-certs-837871 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:10:42 embed-certs-837871 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:10:55 embed-certs-837871 kubelet[3846]: E0117 00:10:55.148735    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:11:09 embed-certs-837871 kubelet[3846]: E0117 00:11:09.149151    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:11:21 embed-certs-837871 kubelet[3846]: E0117 00:11:21.149622    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:11:36 embed-certs-837871 kubelet[3846]: E0117 00:11:36.150085    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:11:42 embed-certs-837871 kubelet[3846]: E0117 00:11:42.234629    3846 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:11:42 embed-certs-837871 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:11:42 embed-certs-837871 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:11:42 embed-certs-837871 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:11:50 embed-certs-837871 kubelet[3846]: E0117 00:11:50.149192    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:12:04 embed-certs-837871 kubelet[3846]: E0117 00:12:04.150270    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:12:19 embed-certs-837871 kubelet[3846]: E0117 00:12:19.148751    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:12:30 embed-certs-837871 kubelet[3846]: E0117 00:12:30.149202    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:12:41 embed-certs-837871 kubelet[3846]: E0117 00:12:41.149032    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:12:42 embed-certs-837871 kubelet[3846]: E0117 00:12:42.232966    3846 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:12:42 embed-certs-837871 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:12:42 embed-certs-837871 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:12:42 embed-certs-837871 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:12:53 embed-certs-837871 kubelet[3846]: E0117 00:12:53.148949    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:13:05 embed-certs-837871 kubelet[3846]: E0117 00:13:05.148922    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	
	
	==> storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] <==
	I0116 23:59:58.185029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:59:58.198963       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:59:58.200559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:59:58.221321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:59:58.226878       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-837871_4d94f53a-2d4d-4403-a544-da32a34a5386!
	I0116 23:59:58.246319       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfde273a-d420-49e4-987f-a4fcc5a0f676", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-837871_4d94f53a-2d4d-4403-a544-da32a34a5386 became leader
	I0116 23:59:58.328059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-837871_4d94f53a-2d4d-4403-a544-da32a34a5386!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-837871 -n embed-certs-837871
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-837871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6rsbl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-837871 describe pod metrics-server-57f55c9bc5-6rsbl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-837871 describe pod metrics-server-57f55c9bc5-6rsbl: exit status 1 (67.380796ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6rsbl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-837871 describe pod metrics-server-57f55c9bc5-6rsbl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0117 00:04:41.171742   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0117 00:04:42.667053   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0117 00:04:54.492144   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0117 00:04:55.289955   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0117 00:05:01.286043   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:13:34.478000417 +0000 UTC m=+5837.693504215
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-967325 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-967325 logs -n 25: (1.606087836s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo cat                              | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:50:38.759896   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.759907   60269 out.go:309] Setting ErrFile to fd 2...
	I0116 23:50:38.759914   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.760126   60269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:50:38.760678   60269 out.go:303] Setting JSON to false
	I0116 23:50:38.761641   60269 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5585,"bootTime":1705443454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:50:38.761709   60269 start.go:138] virtualization: kvm guest
	I0116 23:50:38.763997   60269 out.go:177] * [default-k8s-diff-port-967325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:50:38.765368   60269 notify.go:220] Checking for updates...
	I0116 23:50:38.767255   60269 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:50:38.768689   60269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:50:38.770002   60269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:50:38.771265   60269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:50:38.772478   60269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:50:38.773887   60269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:50:38.775771   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:50:38.776343   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.776406   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.790484   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0116 23:50:38.790881   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.791331   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.791354   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.791767   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.791948   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.792207   60269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:50:38.792478   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.792512   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.806373   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0116 23:50:38.806769   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.807352   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.807377   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.807713   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.807888   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.844486   60269 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:50:38.845772   60269 start.go:298] selected driver: kvm2
	I0116 23:50:38.845786   60269 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.845896   60269 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:50:38.846669   60269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.846746   60269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:50:38.861437   60269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:50:38.861794   60269 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:50:38.861869   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:50:38.861886   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:50:38.861903   60269 start_flags.go:321] config:
	{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.862070   60269 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.864512   60269 out.go:177] * Starting control plane node default-k8s-diff-port-967325 in cluster default-k8s-diff-port-967325
	I0116 23:50:35.694534   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.766489   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.865813   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:50:38.865854   60269 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:50:38.865868   60269 cache.go:56] Caching tarball of preloaded images
	I0116 23:50:38.865946   60269 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:50:38.865958   60269 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:50:38.866067   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:50:38.866254   60269 start.go:365] acquiring machines lock for default-k8s-diff-port-967325: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
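
The goroutine above (60269) finds the cri-o preload tarball already in the local cache, skips the download, saves the profile config, and queues for the machines lock. A minimal sketch of that cache check, assuming the layout shown in the log; preloadPath is an illustrative helper, not minikube's actual API:

// preload_check.go — illustrative only; not minikube's implementation.
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// preloadPath builds the expected location of the preload tarball for a
// given Kubernetes version and container runtime (assumed cache layout).
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
    name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
    return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
    home, _ := os.UserHomeDir()
    p := preloadPath(filepath.Join(home, ".minikube"), "v1.28.4", "cri-o")
    if _, err := os.Stat(p); err == nil {
        fmt.Println("found local preload, skipping download:", p)
    } else {
        fmt.Println("preload missing, would download:", p)
    }
}
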
	I0116 23:50:44.846593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:47.918614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:53.998619   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:57.070626   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:03.150612   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:06.222615   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:12.302594   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:15.374637   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:21.454609   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:24.526620   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:30.606636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:33.678599   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:39.758623   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:42.830638   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:48.910588   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:51.982570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:58.062585   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:01.134627   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:07.214606   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:10.286692   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:16.366642   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:19.438617   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:25.518614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:28.590572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:34.670577   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:37.742593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:43.822547   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:46.894566   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:52.974586   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:56.046663   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:02.126625   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:05.198647   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:11.278567   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:14.350629   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:20.430640   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:23.502572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:29.582639   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:32.654601   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:38.734636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:41.806621   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:47.886613   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:50.958654   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:57.038576   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:00.110570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:03.114737   59938 start.go:369] acquired machines lock for "no-preload-085322" in 4m4.444202574s
	I0116 23:54:03.114809   59938 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:03.114817   59938 fix.go:54] fixHost starting: 
	I0116 23:54:03.115151   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:03.115188   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:03.129740   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0116 23:54:03.130141   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:03.130598   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:54:03.130619   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:03.130926   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:03.131095   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:03.131232   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:54:03.132851   59938 fix.go:102] recreateIfNeeded on no-preload-085322: state=Stopped err=<nil>
	I0116 23:54:03.132873   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	W0116 23:54:03.133043   59938 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:03.134884   59938 out.go:177] * Restarting existing kvm2 VM for "no-preload-085322" ...
	I0116 23:54:03.136262   59938 main.go:141] libmachine: (no-preload-085322) Calling .Start
	I0116 23:54:03.136432   59938 main.go:141] libmachine: (no-preload-085322) Ensuring networks are active...
	I0116 23:54:03.137113   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network default is active
	I0116 23:54:03.137528   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network mk-no-preload-085322 is active
	I0116 23:54:03.137880   59938 main.go:141] libmachine: (no-preload-085322) Getting domain xml...
	I0116 23:54:03.138613   59938 main.go:141] libmachine: (no-preload-085322) Creating domain...
	I0116 23:54:03.112375   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:03.112409   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:54:03.114601   59622 machine.go:91] provisioned docker machine in 4m37.41859178s
	I0116 23:54:03.114647   59622 fix.go:56] fixHost completed within 4m37.439054279s
	I0116 23:54:03.114654   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 4m37.439073197s
	W0116 23:54:03.114678   59622 start.go:694] error starting host: provision: host is not running
	W0116 23:54:03.114769   59622 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:54:03.114780   59622 start.go:709] Will try again in 5 seconds ...
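
Goroutine 59622 has now abandoned the current provisioning attempt ("provision: host is not running"), emitted the warning, and scheduled another attempt five seconds later. A rough sketch of that try-then-retry shape; startHost here is a stand-in that always fails, not the real start path:

// start_retry.go — sketch of the "will try again in 5 seconds" pattern.
package main

import (
    "errors"
    "fmt"
    "time"
)

// startHost simulates the failing provisioning attempt seen in the log.
func startHost() error {
    return errors.New("provision: host is not running")
}

func main() {
    if err := startHost(); err != nil {
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second)
        if err := startHost(); err != nil {
            fmt.Println("second attempt also failed:", err)
            return
        }
    }
    fmt.Println("host started")
}
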
	I0116 23:54:04.327758   59938 main.go:141] libmachine: (no-preload-085322) Waiting to get IP...
	I0116 23:54:04.328580   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.329077   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.329172   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.329065   60794 retry.go:31] will retry after 242.417074ms: waiting for machine to come up
	I0116 23:54:04.573623   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.574286   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.574314   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.574234   60794 retry.go:31] will retry after 376.338621ms: waiting for machine to come up
	I0116 23:54:04.952081   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.952569   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.952609   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.952512   60794 retry.go:31] will retry after 437.645823ms: waiting for machine to come up
	I0116 23:54:05.392169   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.392672   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.392701   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.392621   60794 retry.go:31] will retry after 422.797207ms: waiting for machine to come up
	I0116 23:54:05.817196   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.817610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.817639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.817571   60794 retry.go:31] will retry after 640.372887ms: waiting for machine to come up
	I0116 23:54:06.459387   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:06.459792   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:06.459822   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:06.459719   60794 retry.go:31] will retry after 683.537292ms: waiting for machine to come up
	I0116 23:54:07.144668   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:07.144994   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:07.145027   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:07.144980   60794 retry.go:31] will retry after 898.931175ms: waiting for machine to come up
	I0116 23:54:08.045022   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:08.045409   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:08.045437   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:08.045355   60794 retry.go:31] will retry after 1.288697598s: waiting for machine to come up
	I0116 23:54:08.117270   59622 start.go:365] acquiring machines lock for old-k8s-version-771669: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:54:09.335202   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:09.335610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:09.335639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:09.335546   60794 retry.go:31] will retry after 1.355850443s: waiting for machine to come up
	I0116 23:54:10.693078   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:10.693554   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:10.693606   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:10.693520   60794 retry.go:31] will retry after 1.916329826s: waiting for machine to come up
	I0116 23:54:12.611840   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:12.612332   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:12.612367   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:12.612282   60794 retry.go:31] will retry after 2.556862035s: waiting for machine to come up
	I0116 23:54:15.171589   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:15.172039   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:15.172068   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:15.171972   60794 retry.go:31] will retry after 2.519530929s: waiting for machine to come up
	I0116 23:54:17.694557   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:17.694939   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:17.694968   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:17.694886   60794 retry.go:31] will retry after 3.090458186s: waiting for machine to come up
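
The retry.go lines above poll for the restarted VM's IP address, waiting between attempts with delays that grow from a few hundred milliseconds toward several seconds until the DHCP lease shows up. A sketch of that backoff loop; lookupIP stands in for the libvirt lease query, and the growth factor and timeout are chosen only to resemble the log, not taken from minikube:

// ip_wait.go — illustrative backoff loop for "waiting for machine to come up".
package main

import (
    "errors"
    "fmt"
    "time"
)

// lookupIP is a placeholder for querying the domain's DHCP lease.
func lookupIP() (string, error) {
    return "", errors.New("unable to find current IP address")
}

func main() {
    delay := 250 * time.Millisecond
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        if ip, err := lookupIP(); err == nil {
            fmt.Println("machine is up at", ip)
            return
        }
        fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
        time.Sleep(delay)
        if delay < 3*time.Second {
            delay += delay / 2 // grow the delay, loosely matching the log
        }
    }
    fmt.Println("timed out waiting for machine IP")
}
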
	I0116 23:54:21.986927   60073 start.go:369] acquired machines lock for "embed-certs-837871" in 4m12.827160117s
	I0116 23:54:21.986990   60073 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:21.986998   60073 fix.go:54] fixHost starting: 
	I0116 23:54:21.987380   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:21.987421   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:22.004600   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0116 23:54:22.004995   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:22.005467   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:54:22.005496   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:22.005829   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:22.006029   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:22.006185   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:54:22.008077   60073 fix.go:102] recreateIfNeeded on embed-certs-837871: state=Stopped err=<nil>
	I0116 23:54:22.008103   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	W0116 23:54:22.008290   60073 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:22.010638   60073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-837871" ...
	I0116 23:54:20.788433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788853   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has current primary IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788879   59938 main.go:141] libmachine: (no-preload-085322) Found IP for machine: 192.168.50.183
	I0116 23:54:20.788893   59938 main.go:141] libmachine: (no-preload-085322) Reserving static IP address...
	I0116 23:54:20.789229   59938 main.go:141] libmachine: (no-preload-085322) Reserved static IP address: 192.168.50.183
	I0116 23:54:20.789275   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.789290   59938 main.go:141] libmachine: (no-preload-085322) Waiting for SSH to be available...
	I0116 23:54:20.789318   59938 main.go:141] libmachine: (no-preload-085322) DBG | skip adding static IP to network mk-no-preload-085322 - found existing host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"}
	I0116 23:54:20.789337   59938 main.go:141] libmachine: (no-preload-085322) DBG | Getting to WaitForSSH function...
	I0116 23:54:20.791667   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792013   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.792054   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792155   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH client type: external
	I0116 23:54:20.792182   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa (-rw-------)
	I0116 23:54:20.792239   59938 main.go:141] libmachine: (no-preload-085322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:20.792264   59938 main.go:141] libmachine: (no-preload-085322) DBG | About to run SSH command:
	I0116 23:54:20.792282   59938 main.go:141] libmachine: (no-preload-085322) DBG | exit 0
	I0116 23:54:20.878320   59938 main.go:141] libmachine: (no-preload-085322) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:20.878650   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetConfigRaw
	I0116 23:54:20.879331   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:20.881964   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882374   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.882410   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882680   59938 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:54:20.882904   59938 machine.go:88] provisioning docker machine ...
	I0116 23:54:20.882923   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:20.883142   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883335   59938 buildroot.go:166] provisioning hostname "no-preload-085322"
	I0116 23:54:20.883356   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883553   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:20.885549   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.885943   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.885978   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.886040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:20.886216   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886593   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:20.886774   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:20.887119   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:20.887134   59938 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname
	I0116 23:54:21.013385   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-085322
	
	I0116 23:54:21.013408   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.016312   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016630   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.016670   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016859   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.017058   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017252   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017386   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.017557   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.017929   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.017956   59938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-085322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-085322/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-085322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:21.135238   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:21.135270   59938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:21.135289   59938 buildroot.go:174] setting up certificates
	I0116 23:54:21.135313   59938 provision.go:83] configureAuth start
	I0116 23:54:21.135326   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:21.135618   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.138168   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138443   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.138470   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138654   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.140789   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141120   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.141147   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141324   59938 provision.go:138] copyHostCerts
	I0116 23:54:21.141367   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:21.141377   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:21.141447   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:21.141550   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:21.141561   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:21.141599   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:21.141671   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:21.141682   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:21.141714   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:21.141791   59938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.no-preload-085322 san=[192.168.50.183 192.168.50.183 localhost 127.0.0.1 minikube no-preload-085322]
	I0116 23:54:21.265735   59938 provision.go:172] copyRemoteCerts
	I0116 23:54:21.265800   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:21.265825   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.268291   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268647   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.268676   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268842   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.269076   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.269250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.269383   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.351116   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:21.373208   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 23:54:21.395440   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:54:21.418028   59938 provision.go:86] duration metric: configureAuth took 282.698913ms
	I0116 23:54:21.418069   59938 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:21.418298   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:54:21.418409   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.421433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421751   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.421792   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421959   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.422191   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422491   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.422646   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.422977   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.422995   59938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:21.743469   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:21.743502   59938 machine.go:91] provisioned docker machine in 860.58306ms
	I0116 23:54:21.743515   59938 start.go:300] post-start starting for "no-preload-085322" (driver="kvm2")
	I0116 23:54:21.743538   59938 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:21.743558   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.743870   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:21.743898   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.746430   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746786   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.746823   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746957   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.747146   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.747302   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.747394   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.837160   59938 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:21.841116   59938 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:21.841157   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:21.841249   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:21.841329   59938 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:21.841413   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:21.849407   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:21.872039   59938 start.go:303] post-start completed in 128.504699ms
	I0116 23:54:21.872072   59938 fix.go:56] fixHost completed within 18.75725342s
	I0116 23:54:21.872110   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.874707   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875214   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.875240   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875487   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.875722   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.875867   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.876032   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.876210   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.876556   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.876570   59938 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:21.986781   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449261.939803143
	
	I0116 23:54:21.986801   59938 fix.go:206] guest clock: 1705449261.939803143
	I0116 23:54:21.986809   59938 fix.go:219] Guest: 2024-01-16 23:54:21.939803143 +0000 UTC Remote: 2024-01-16 23:54:21.872075872 +0000 UTC m=+263.353199909 (delta=67.727271ms)
	I0116 23:54:21.986830   59938 fix.go:190] guest clock delta is within tolerance: 67.727271ms
	I0116 23:54:21.986836   59938 start.go:83] releasing machines lock for "no-preload-085322", held for 18.872049435s
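
Before releasing the machines lock, fix.go reads the guest clock over SSH and compares it to the host clock; the 67.7ms delta above is accepted as within tolerance. A small sketch of that comparison using the two timestamps from the log; the 2-second tolerance is an assumed value for illustration, not minikube's actual constant:

// clock_delta.go — sketch of the guest/host clock tolerance check.
package main

import (
    "fmt"
    "time"
)

func main() {
    guest := time.Date(2024, time.January, 16, 23, 54, 21, 939803143, time.UTC)
    host := time.Date(2024, time.January, 16, 23, 54, 21, 872075872, time.UTC)

    delta := guest.Sub(host)
    if delta < 0 {
        delta = -delta
    }
    const tolerance = 2 * time.Second // assumed threshold
    if delta <= tolerance {
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    } else {
        fmt.Printf("guest clock delta %v exceeds %v, would resync\n", delta, tolerance)
    }
}
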
	I0116 23:54:21.986866   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.987132   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.990038   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990450   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.990479   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990658   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991145   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991340   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991433   59938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:21.991476   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.991598   59938 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:21.991622   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.994160   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994384   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994588   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994611   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994696   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.994864   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.994879   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994956   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.995040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.995116   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995212   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.995279   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.995338   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995469   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:22.075709   59938 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:22.113571   59938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:22.255250   59938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:22.261120   59938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:22.261199   59938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:22.275644   59938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:22.275667   59938 start.go:475] detecting cgroup driver to use...
	I0116 23:54:22.275740   59938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:22.292314   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:22.303940   59938 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:22.303994   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:22.316146   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:22.328261   59938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:22.429568   59938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:22.545391   59938 docker.go:233] disabling docker service ...
	I0116 23:54:22.545478   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:22.558823   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:22.571068   59938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:22.680713   59938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:22.784418   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:22.800751   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:22.819671   59938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:22.819738   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.831950   59938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:22.832019   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.842937   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.853168   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.863057   59938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:22.873184   59938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:22.881975   59938 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:22.882051   59938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:22.895888   59938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:22.904754   59938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:23.007196   59938 ssh_runner.go:195] Run: sudo systemctl restart crio
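
The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed so that cri-o uses registry.k8s.io/pause:3.9 and the cgroupfs cgroup manager with conmon pinned to the "pod" cgroup, then restart the service. A local, SSH-free sketch of the same substitutions on an assumed config snippet (the starting content here is made up for illustration):

// crio_conf.go — mirrors the sed edits on a sample drop-in config.
package main

import (
    "fmt"
    "regexp"
)

func main() {
    conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.6\"\n" +
        "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

    // Point cri-o at the pause image the kubelet expects.
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    // Drop any existing conmon_cgroup line, then switch to cgroupfs and pin conmon to "pod".
    conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

    fmt.Print(conf)
}
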
	I0116 23:54:23.167523   59938 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:23.167604   59938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:23.172603   59938 start.go:543] Will wait 60s for crictl version
	I0116 23:54:23.172661   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.176234   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:23.211267   59938 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:23.211355   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.255175   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.300404   59938 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 23:54:23.302242   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:23.305445   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.305835   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:23.305860   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.306058   59938 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:23.310150   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:23.321291   59938 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 23:54:23.321348   59938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:23.358829   59938 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 23:54:23.358866   59938 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:54:23.358910   59938 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:23.358974   59938 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.359014   59938 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.359037   59938 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.359019   59938 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 23:54:23.359109   59938 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.359116   59938 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.359192   59938 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360471   59938 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.360486   59938 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.360479   59938 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 23:54:23.360482   59938 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.360503   59938 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
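
Because no preload tarball exists for v1.29.0-rc.2, LoadImages falls back to checking each required image directly in the guest's container runtime (the sudo podman image inspect calls that follow) and marks the missing ones as needing transfer from the local cache. A sketch of that classification step; the image list is copied from the log, and presence is simulated with a map instead of calling podman or crictl:

// image_check.go — sketch of deciding which images "need transfer".
package main

import "fmt"

var required = []string{
    "registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
    "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
    "registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
    "registry.k8s.io/kube-proxy:v1.29.0-rc.2",
    "registry.k8s.io/pause:3.9",
    "registry.k8s.io/etcd:3.5.10-0",
    "registry.k8s.io/coredns/coredns:v1.11.1",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
}

// present fakes the result of inspecting the runtime's image store.
var present = map[string]bool{
    "registry.k8s.io/pause:3.9": true,
}

func main() {
    for _, img := range required {
        if present[img] {
            fmt.Println("already in container runtime:", img)
            continue
        }
        fmt.Printf("%q needs transfer: not present in container runtime\n", img)
    }
}
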
	I0116 23:54:22.012196   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Start
	I0116 23:54:22.012405   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring networks are active...
	I0116 23:54:22.013178   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network default is active
	I0116 23:54:22.013529   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network mk-embed-certs-837871 is active
	I0116 23:54:22.013912   60073 main.go:141] libmachine: (embed-certs-837871) Getting domain xml...
	I0116 23:54:22.014514   60073 main.go:141] libmachine: (embed-certs-837871) Creating domain...
	I0116 23:54:23.261878   60073 main.go:141] libmachine: (embed-certs-837871) Waiting to get IP...
	I0116 23:54:23.263010   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.263550   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.263625   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.263530   60915 retry.go:31] will retry after 307.379701ms: waiting for machine to come up
	I0116 23:54:23.572127   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.572604   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.572640   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.572557   60915 retry.go:31] will retry after 367.767271ms: waiting for machine to come up
	I0116 23:54:23.942420   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.942907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.942937   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.942855   60915 retry.go:31] will retry after 327.227989ms: waiting for machine to come up
	I0116 23:54:23.582933   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.587427   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.591221   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 23:54:23.600943   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.601854   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.620857   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.636430   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.654149   59938 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 23:54:23.654203   59938 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.654256   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.704462   59938 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 23:54:23.704519   59938 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.704571   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851614   59938 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 23:54:23.851646   59938 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 23:54:23.851663   59938 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.851662   59938 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851711   59938 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 23:54:23.851754   59938 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.851767   59938 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 23:54:23.851795   59938 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.851802   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851832   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.851843   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851845   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.868480   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.906566   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.906609   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.906713   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.927452   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.927455   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.927669   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.927767   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.959664   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 23:54:23.959782   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:23.990016   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 23:54:23.990042   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990040   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:23.990089   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990217   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:24.018967   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019064   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 23:54:24.019080   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019089   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019115   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 23:54:24.019135   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019160   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:24.164580   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.888709   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898467269s)
	I0116 23:54:26.888747   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 23:54:26.888768   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888777   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.869591717s)
	I0116 23:54:26.888817   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888824   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 23:54:26.888710   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.869617277s)
	I0116 23:54:26.888879   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 23:54:26.888856   59938 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724243534s)
	I0116 23:54:26.888931   59938 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 23:54:26.888965   59938 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.889006   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:24.271311   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.271747   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.271777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.271695   60915 retry.go:31] will retry after 459.459832ms: waiting for machine to come up
	I0116 23:54:24.732506   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.733007   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.733036   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.732957   60915 retry.go:31] will retry after 584.775753ms: waiting for machine to come up
	I0116 23:54:25.319663   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:25.320171   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:25.320215   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:25.320117   60915 retry.go:31] will retry after 942.568443ms: waiting for machine to come up
	I0116 23:54:26.264735   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:26.265207   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:26.265241   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:26.265152   60915 retry.go:31] will retry after 986.504626ms: waiting for machine to come up
	I0116 23:54:27.253751   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:27.254422   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:27.254451   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:27.254363   60915 retry.go:31] will retry after 1.332096797s: waiting for machine to come up
	I0116 23:54:28.588407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:28.589024   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:28.589057   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:28.588967   60915 retry.go:31] will retry after 1.510766858s: waiting for machine to come up
	I0116 23:54:29.054814   59938 ssh_runner.go:235] Completed: which crictl: (2.165780571s)
	I0116 23:54:29.054899   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:29.054938   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.166081855s)
	I0116 23:54:29.054973   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 23:54:29.055002   59938 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:29.055058   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:32.781289   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.726190592s)
	I0116 23:54:32.781378   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 23:54:32.781384   59938 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.72645917s)
	I0116 23:54:32.781421   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781452   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 23:54:32.781499   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781549   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:32.786061   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 23:54:30.101582   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:30.102035   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:30.102080   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:30.101996   60915 retry.go:31] will retry after 1.681256612s: waiting for machine to come up
	I0116 23:54:31.786133   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:31.786678   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:31.786717   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:31.786625   60915 retry.go:31] will retry after 2.501397759s: waiting for machine to come up
	I0116 23:54:35.155364   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.37383462s)
	I0116 23:54:35.155398   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 23:54:35.155423   59938 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:35.155471   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:37.035841   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880336789s)
	I0116 23:54:37.035878   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 23:54:37.035908   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:37.035957   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:38.382731   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.346744157s)
	I0116 23:54:38.382770   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 23:54:38.382801   59938 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:38.382857   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:34.289289   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:34.289853   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:34.289876   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:34.289788   60915 retry.go:31] will retry after 2.655614857s: waiting for machine to come up
	I0116 23:54:36.947614   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:36.948090   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:36.948110   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:36.948022   60915 retry.go:31] will retry after 3.331974558s: waiting for machine to come up
	I0116 23:54:41.527170   60269 start.go:369] acquired machines lock for "default-k8s-diff-port-967325" in 4m2.660883224s
	I0116 23:54:41.527252   60269 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:41.527265   60269 fix.go:54] fixHost starting: 
	I0116 23:54:41.527698   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:41.527739   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:41.544050   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0116 23:54:41.544467   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:41.544979   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:54:41.545009   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:41.545297   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:41.545474   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:54:41.545619   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:54:41.547250   60269 fix.go:102] recreateIfNeeded on default-k8s-diff-port-967325: state=Stopped err=<nil>
	I0116 23:54:41.547276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	W0116 23:54:41.547440   60269 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:41.550415   60269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-967325" ...
	I0116 23:54:40.284163   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.284689   60073 main.go:141] libmachine: (embed-certs-837871) Found IP for machine: 192.168.39.226
	I0116 23:54:40.284718   60073 main.go:141] libmachine: (embed-certs-837871) Reserving static IP address...
	I0116 23:54:40.284734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has current primary IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.285176   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.285209   60073 main.go:141] libmachine: (embed-certs-837871) DBG | skip adding static IP to network mk-embed-certs-837871 - found existing host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"}
	I0116 23:54:40.285223   60073 main.go:141] libmachine: (embed-certs-837871) Reserved static IP address: 192.168.39.226
	I0116 23:54:40.285240   60073 main.go:141] libmachine: (embed-certs-837871) Waiting for SSH to be available...
	I0116 23:54:40.285254   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Getting to WaitForSSH function...
	I0116 23:54:40.287766   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288257   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.288283   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288417   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH client type: external
	I0116 23:54:40.288441   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa (-rw-------)
	I0116 23:54:40.288466   60073 main.go:141] libmachine: (embed-certs-837871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:40.288473   60073 main.go:141] libmachine: (embed-certs-837871) DBG | About to run SSH command:
	I0116 23:54:40.288481   60073 main.go:141] libmachine: (embed-certs-837871) DBG | exit 0
	I0116 23:54:40.374194   60073 main.go:141] libmachine: (embed-certs-837871) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:40.374646   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetConfigRaw
	I0116 23:54:40.375380   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.378323   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.378843   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.378877   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.379145   60073 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:54:40.379332   60073 machine.go:88] provisioning docker machine ...
	I0116 23:54:40.379351   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:40.379538   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379712   60073 buildroot.go:166] provisioning hostname "embed-certs-837871"
	I0116 23:54:40.379731   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379882   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.382022   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382386   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.382408   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382542   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.382695   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.382833   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.383019   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.383201   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.383686   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.383707   60073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-837871 && echo "embed-certs-837871" | sudo tee /etc/hostname
	I0116 23:54:40.506034   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-837871
	
	I0116 23:54:40.506064   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.508789   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509236   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.509266   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509427   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.509624   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509782   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509909   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.510109   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.510593   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.510620   60073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-837871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-837871/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-837871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:40.626272   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:40.626298   60073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:40.626356   60073 buildroot.go:174] setting up certificates
	I0116 23:54:40.626372   60073 provision.go:83] configureAuth start
	I0116 23:54:40.626383   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.626705   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.629226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629577   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.629605   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629737   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.631784   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632093   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.632114   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632249   60073 provision.go:138] copyHostCerts
	I0116 23:54:40.632306   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:40.632318   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:40.632389   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:40.632489   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:40.632499   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:40.632529   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:40.632607   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:40.632617   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:40.632645   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:40.632705   60073 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.embed-certs-837871 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube embed-certs-837871]
	I0116 23:54:40.842680   60073 provision.go:172] copyRemoteCerts
	I0116 23:54:40.842749   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:40.842778   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.845198   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845585   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.845626   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845798   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.845987   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.846158   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.846313   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:40.931372   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:54:40.955528   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:40.979724   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 23:54:41.000711   60073 provision.go:86] duration metric: configureAuth took 374.325381ms
	I0116 23:54:41.000743   60073 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:41.000988   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:54:41.001078   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.003907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.004256   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004472   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.004703   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.004886   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.005025   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.005172   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.005489   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.005505   60073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:41.294820   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:41.294846   60073 machine.go:91] provisioned docker machine in 915.500911ms
	I0116 23:54:41.294860   60073 start.go:300] post-start starting for "embed-certs-837871" (driver="kvm2")
	I0116 23:54:41.294873   60073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:41.294894   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.295245   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:41.295275   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.298053   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298453   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.298482   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298630   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.298831   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.299028   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.299229   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.383434   60073 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:41.387526   60073 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:41.387550   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:41.387618   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:41.387716   60073 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:41.387832   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:41.395959   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:41.417602   60073 start.go:303] post-start completed in 122.726786ms
	I0116 23:54:41.417634   60073 fix.go:56] fixHost completed within 19.430636017s
	I0116 23:54:41.417657   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.420348   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420665   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.420692   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420853   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.421099   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421245   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421386   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.421532   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.421882   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.421898   60073 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:41.527026   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449281.479666719
	
	I0116 23:54:41.527054   60073 fix.go:206] guest clock: 1705449281.479666719
	I0116 23:54:41.527061   60073 fix.go:219] Guest: 2024-01-16 23:54:41.479666719 +0000 UTC Remote: 2024-01-16 23:54:41.417638777 +0000 UTC m=+272.403645668 (delta=62.027942ms)
	I0116 23:54:41.527080   60073 fix.go:190] guest clock delta is within tolerance: 62.027942ms
	I0116 23:54:41.527085   60073 start.go:83] releasing machines lock for "embed-certs-837871", held for 19.540117712s
	I0116 23:54:41.527105   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.527420   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:41.530393   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.530857   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.530884   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.531031   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531460   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531637   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531720   60073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:41.531774   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.531821   60073 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:41.531854   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.534407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534578   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.534819   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534933   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535031   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.535068   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.535135   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535229   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535308   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535381   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535431   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.535512   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535633   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.653469   60073 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:41.658877   60073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:41.797035   60073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:41.804397   60073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:41.804475   60073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:41.819295   60073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:41.819319   60073 start.go:475] detecting cgroup driver to use...
	I0116 23:54:41.819382   60073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:41.833454   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:41.845089   60073 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:41.845145   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:41.857037   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:41.869156   60073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:41.968252   60073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:42.079885   60073 docker.go:233] disabling docker service ...
	I0116 23:54:42.079949   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:42.091847   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:42.102517   60073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:42.217275   60073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:42.314542   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:42.326438   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:42.342285   60073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:42.342356   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.354962   60073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:42.355039   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.367222   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.379029   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.387819   60073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:42.396923   60073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:42.404505   60073 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:42.404567   60073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:42.415632   60073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:42.423935   60073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:42.520457   60073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:42.676659   60073 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:42.676727   60073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:42.681457   60073 start.go:543] Will wait 60s for crictl version
	I0116 23:54:42.681535   60073 ssh_runner.go:195] Run: which crictl
	I0116 23:54:42.685259   60073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:42.728719   60073 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:42.728807   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.780603   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.830363   60073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:54:39.032115   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 23:54:39.032163   59938 cache_images.go:123] Successfully loaded all cached images
	I0116 23:54:39.032171   59938 cache_images.go:92] LoadImages completed in 15.67329231s
	I0116 23:54:39.032335   59938 ssh_runner.go:195] Run: crio config
	I0116 23:54:39.091256   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:39.091279   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:39.091299   59938 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:39.091318   59938 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-085322 NodeName:no-preload-085322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:39.091470   59938 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-085322"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:39.091558   59938 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-085322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:39.091619   59938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 23:54:39.100748   59938 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:39.100805   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:39.108879   59938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 23:54:39.123478   59938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 23:54:39.138234   59938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 23:54:39.153408   59938 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:39.156806   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
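
The two commands above keep the control-plane hosts entry idempotent: the grep first checks whether the mapping already exists, and the bash one-liner strips any stale line for the hostname before appending the current IP and copying the temp file back over /etc/hosts. A minimal Go sketch of the same update (hypothetical helper, operating on a local file rather than over minikube's ssh_runner):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry rewrites path so that exactly one line maps name to ip,
// dropping any previous line that ends with the same hostname.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // skip blanks and the stale entry being replaced
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example against a scratch copy; the log's real target is /etc/hosts.
	fmt.Println(setHostsEntry("/tmp/hosts", "192.168.50.183", "control-plane.minikube.internal"))
}
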
	I0116 23:54:39.168459   59938 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322 for IP: 192.168.50.183
	I0116 23:54:39.168490   59938 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:39.168630   59938 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:39.168669   59938 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:39.168728   59938 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/client.key
	I0116 23:54:39.168800   59938 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key.c63b40e0
	I0116 23:54:39.168839   59938 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key
	I0116 23:54:39.168946   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:39.168971   59938 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:39.168981   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:39.169006   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:39.169029   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:39.169052   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:39.169104   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:39.169755   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:39.191634   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:54:39.213185   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:39.234431   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:54:39.255434   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:39.277092   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:39.299752   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:39.321124   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:39.342706   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:39.363848   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:39.384588   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:39.405641   59938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:39.421517   59938 ssh_runner.go:195] Run: openssl version
	I0116 23:54:39.426839   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:39.435875   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440157   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440217   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.445267   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:39.454308   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:39.463232   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467601   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467660   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.473056   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:39.482143   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:39.491441   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495918   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495984   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.501453   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:39.510832   59938 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:39.515055   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:39.520820   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:39.526190   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:39.531649   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:39.536949   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:39.542406   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
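
The six `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours (86400 seconds) before the existing certs are reused. A minimal Go sketch of an equivalent check (hypothetical helper, not minikube's actual implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// for at least d more time (the `openssl -checkend` equivalent).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
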
	I0116 23:54:39.547673   59938 kubeadm.go:404] StartCluster: {Name:no-preload-085322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:39.547793   59938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:39.547843   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:39.584159   59938 cri.go:89] found id: ""
	I0116 23:54:39.584236   59938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:39.592749   59938 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:39.592769   59938 kubeadm.go:636] restartCluster start
	I0116 23:54:39.592830   59938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:39.600998   59938 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:39.602031   59938 kubeconfig.go:92] found "no-preload-085322" server: "https://192.168.50.183:8443"
	I0116 23:54:39.604410   59938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:39.612167   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:39.612220   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:39.622740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.112200   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.112274   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.123342   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.612980   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.613059   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.624162   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.112722   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.112787   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.123740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.612248   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.626135   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.112616   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.112723   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.126872   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.612417   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.612503   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.623787   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.112309   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.112383   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.127168   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
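
The repeated "Checking apiserver status ..." entries above are a fixed-interval poll: roughly every 500 ms the runner executes `sudo pgrep -xnf kube-apiserver.*minikube.*` and records the failure until the process appears. A minimal sketch of that kind of loop (hypothetical, written against a local pgrep rather than minikube's ssh_runner):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process appears
// or the context is cancelled, mirroring the ~500ms retry cadence in the log.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-ticker.C:
			// try again on the next tick
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}
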
	I0116 23:54:41.551739   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Start
	I0116 23:54:41.551879   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring networks are active...
	I0116 23:54:41.552631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network default is active
	I0116 23:54:41.552977   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network mk-default-k8s-diff-port-967325 is active
	I0116 23:54:41.553395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Getting domain xml...
	I0116 23:54:41.554029   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Creating domain...
	I0116 23:54:42.830696   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting to get IP...
	I0116 23:54:42.831669   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832085   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832186   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:42.832069   61077 retry.go:31] will retry after 250.838508ms: waiting for machine to come up
	I0116 23:54:43.084848   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085478   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085513   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.085378   61077 retry.go:31] will retry after 344.020128ms: waiting for machine to come up
	I0116 23:54:43.430795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431300   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431329   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.431260   61077 retry.go:31] will retry after 397.588837ms: waiting for machine to come up
	I0116 23:54:42.831766   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:42.834360   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:42.834763   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834949   60073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:42.838761   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:42.853154   60073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:54:42.853222   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:42.890184   60073 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:54:42.890265   60073 ssh_runner.go:195] Run: which lz4
	I0116 23:54:42.894168   60073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 23:54:42.898036   60073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:54:42.898066   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:54:43.612492   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.612614   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.626278   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.112257   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.112377   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.126612   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.612241   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.626667   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.112214   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.112305   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.127417   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.612957   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.613061   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.626610   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.112219   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.112324   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.126151   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.612419   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.612513   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.623163   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.112516   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.112621   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.123247   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.612620   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.612713   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.623687   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.112357   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.112460   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.126673   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.830893   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831467   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.831405   61077 retry.go:31] will retry after 443.763933ms: waiting for machine to come up
	I0116 23:54:44.277218   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277738   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.277666   61077 retry.go:31] will retry after 534.948362ms: waiting for machine to come up
	I0116 23:54:44.814256   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814634   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.814585   61077 retry.go:31] will retry after 942.746702ms: waiting for machine to come up
	I0116 23:54:45.758822   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759311   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759340   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:45.759238   61077 retry.go:31] will retry after 1.189643515s: waiting for machine to come up
	I0116 23:54:46.951211   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951644   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:46.951576   61077 retry.go:31] will retry after 1.124824496s: waiting for machine to come up
	I0116 23:54:48.077539   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.077964   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.078001   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:48.077909   61077 retry.go:31] will retry after 1.239334518s: waiting for machine to come up
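
The libmachine DBG lines above show retry.go waiting for the freshly started domain to obtain a DHCP lease, sleeping a progressively longer, jittered interval after each failed lookup (250 ms, 344 ms, ... up to several seconds). A minimal sketch of that pattern (hypothetical helper; the lookup function stands in for the libvirt lease query):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP calls lookup until it returns an address, sleeping a little
// longer (with jitter) after each failed attempt, up to maxWait total.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the interval, as the log's retry.go does
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}
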
	I0116 23:54:44.553853   60073 crio.go:444] Took 1.659729 seconds to copy over tarball
	I0116 23:54:44.553941   60073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:54:47.428880   60073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87490029s)
	I0116 23:54:47.428913   60073 crio.go:451] Took 2.875036 seconds to extract the tarball
	I0116 23:54:47.428921   60073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:54:47.469606   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:47.521549   60073 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:54:47.521580   60073 cache_images.go:84] Images are preloaded, skipping loading
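
The preload decision above is driven by `sudo crictl images --output json`: before extraction the required image (registry.k8s.io/kube-apiserver:v1.28.4) is missing, so the tarball is copied and unpacked; afterwards the same listing reports all images present and image loading is skipped. A minimal sketch of parsing that output (JSON field names assumed from crictl's output format; hypothetical helper, not minikube's crio.go):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// hasImage runs `crictl images --output json` and reports whether any
// image carries the given repo tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return false, err
	}
	for _, img := range resp.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}
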
	I0116 23:54:47.521660   60073 ssh_runner.go:195] Run: crio config
	I0116 23:54:47.575254   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:54:47.575276   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:47.575292   60073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:47.575309   60073 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-837871 NodeName:embed-certs-837871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:47.575434   60073 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-837871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:47.575518   60073 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-837871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:47.575569   60073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:54:47.584525   60073 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:47.584604   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:47.592958   60073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 23:54:47.608090   60073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:54:47.623862   60073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 23:54:47.640242   60073 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:47.644031   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:47.658210   60073 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871 for IP: 192.168.39.226
	I0116 23:54:47.658247   60073 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:47.658451   60073 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:47.658543   60073 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:47.658766   60073 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/client.key
	I0116 23:54:47.658866   60073 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key.1754aec7
	I0116 23:54:47.658920   60073 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key
	I0116 23:54:47.659066   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:47.659104   60073 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:47.659123   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:47.659160   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:47.659190   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:47.659223   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:47.659275   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:47.659998   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:47.687031   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:54:47.713026   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:47.738546   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:54:47.764460   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:47.789464   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:47.814847   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:47.839476   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:47.864396   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:47.889208   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:47.914128   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:47.935079   60073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:47.950932   60073 ssh_runner.go:195] Run: openssl version
	I0116 23:54:47.957306   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:47.967238   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972287   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972338   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.977862   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:47.989326   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:47.999739   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004111   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004170   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.009425   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:48.019822   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:48.029871   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034154   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034221   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.039911   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:48.051585   60073 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:48.056576   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:48.062200   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:48.067931   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:48.073393   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:48.079291   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:48.084923   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:48.090458   60073 kubeadm.go:404] StartCluster: {Name:embed-certs-837871 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:48.090572   60073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:48.090637   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:48.132138   60073 cri.go:89] found id: ""
	I0116 23:54:48.132214   60073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:48.141955   60073 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:48.141976   60073 kubeadm.go:636] restartCluster start
	I0116 23:54:48.142032   60073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:48.151297   60073 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.152324   60073 kubeconfig.go:92] found "embed-certs-837871" server: "https://192.168.39.226:8443"
	I0116 23:54:48.154585   60073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:48.163509   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.163570   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.175536   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.664083   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.664180   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.676605   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.613067   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.992894   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.004266   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.112494   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.112595   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.123795   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.612548   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.612642   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.626676   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.626707   59938 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:49.626718   59938 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:49.626732   59938 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:49.626806   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:49.668119   59938 cri.go:89] found id: ""
	I0116 23:54:49.668192   59938 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:49.682918   59938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:49.691744   59938 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:49.691817   59938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700863   59938 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700895   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:49.815616   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.020421   59938 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.204764214s)
	I0116 23:54:51.020454   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.216832   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.332109   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
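
Because the /etc/kubernetes/*.conf files were missing, restartCluster reconfigures the control plane by re-running individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written /var/tmp/minikube/kubeadm.yaml. A minimal sketch of driving those phases in order (hypothetical wrapper; minikube actually executes these through its ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	// The same phase sequence that appears in the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
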
	I0116 23:54:51.399376   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:51.399475   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:51.899827   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.400392   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.899528   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.399686   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:49.319244   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319686   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319717   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:49.319624   61077 retry.go:31] will retry after 1.922153535s: waiting for machine to come up
	I0116 23:54:51.243587   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244058   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244098   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:51.244008   61077 retry.go:31] will retry after 2.437065869s: waiting for machine to come up
	I0116 23:54:53.683433   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683851   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683882   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:53.683823   61077 retry.go:31] will retry after 3.130209662s: waiting for machine to come up
	I0116 23:54:49.163895   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.351314   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.362966   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.664243   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.664369   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.683487   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.163655   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.163757   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.180005   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.664531   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.664611   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.680106   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.163758   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.163894   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.179982   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.664626   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.664708   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.676699   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.163544   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.163670   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.180656   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.663792   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.663880   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.678849   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.164052   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.164169   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.178666   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.664220   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.664316   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.678867   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
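	The repeated apiserver status checks above run `sudo pgrep -xnf kube-apiserver.*minikube.*` on the guest roughly twice a second until it exits 0. A minimal sketch of that polling loop; it runs locally for simplicity rather than over SSH with sudo as in the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID polls pgrep until a kube-apiserver process matching the
// minikube pattern exists. Exit status 1 from pgrep simply means "no match
// yet", so the loop sleeps and retries, as the checks in the log do.
func apiserverPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := apiserverPID(time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("kube-apiserver pid: ", pid)
}
```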
	I0116 23:54:53.899990   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.919132   59938 api_server.go:72] duration metric: took 2.51975517s to wait for apiserver process to appear ...
	I0116 23:54:53.919159   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:54:53.919179   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.905143   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.905180   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.905196   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.941657   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.941684   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.941697   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.986154   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.986183   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:57.419788   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.424352   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.424379   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:57.919987   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.926989   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.927013   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:58.420219   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:58.426904   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:54:58.435007   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:54:58.435038   59938 api_server.go:131] duration metric: took 4.515871856s to wait for apiserver health ...
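	The healthz sequence above (403 while the RBAC bootstrap hook is still pending, 500 while post-start hooks report failures, then 200) is a plain HTTPS poll of the apiserver. A rough sketch of that loop follows; it skips TLS verification and sends no client credentials, which is one way to get the anonymous-user 403 seen above, and is not how a production client should talk to the apiserver.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200 "ok". Non-200 responses are
// printed and retried, mirroring the 403/500 responses in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.50.183:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```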
	I0116 23:54:58.435051   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:58.435061   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:58.437150   59938 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:54:58.438936   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:54:58.455657   59938 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
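	The scp step above writes a 457-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown in the log. The snippet below writes an illustrative bridge + host-local configuration whose field values (bridge name, pod subnet, CNI version) are assumptions, not the file minikube generated.

```go
package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI conflist; field values are assumptions, not the
// exact configuration written in the log above.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```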
	I0116 23:54:58.508821   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:54:58.522305   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:54:58.522361   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:54:58.522372   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:54:58.522386   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:54:58.522403   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:54:58.522414   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:54:58.522428   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:54:58.522440   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:54:58.522449   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:54:58.522459   59938 system_pods.go:74] duration metric: took 13.604825ms to wait for pod list to return data ...
	I0116 23:54:58.522472   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:54:58.525739   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:54:58.525780   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:54:58.525802   59938 node_conditions.go:105] duration metric: took 3.32348ms to run NodePressure ...
	I0116 23:54:58.525836   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:56.815572   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816189   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816215   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:56.816141   61077 retry.go:31] will retry after 4.356544243s: waiting for machine to come up
	I0116 23:54:54.164263   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.164410   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.179137   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:54.663638   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.663755   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.678463   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.163957   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.164041   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.177018   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.663543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.663648   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.674693   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.164347   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.164456   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.175674   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.664319   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.664402   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.675373   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.164471   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.164576   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.176504   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.664144   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.664251   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.676983   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.164543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:58.164621   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:58.176779   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.176811   60073 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:58.176821   60073 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:58.176833   60073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:58.176899   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:58.214453   60073 cri.go:89] found id: ""
	I0116 23:54:58.214526   60073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:58.232076   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:58.240808   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:58.240879   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.249983   60073 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.250013   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.373313   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.857922   59938 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862719   59938 kubeadm.go:787] kubelet initialised
	I0116 23:54:58.862738   59938 kubeadm.go:788] duration metric: took 4.782925ms waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862746   59938 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:54:58.869022   59938 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.874505   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874535   59938 pod_ready.go:81] duration metric: took 5.485562ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.874546   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874554   59938 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.879329   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879355   59938 pod_ready.go:81] duration metric: took 4.787755ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.879363   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879368   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.883928   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883949   59938 pod_ready.go:81] duration metric: took 4.571713ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.883961   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883969   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.912868   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912894   59938 pod_ready.go:81] duration metric: took 28.911722ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.912907   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912915   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.313029   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313069   59938 pod_ready.go:81] duration metric: took 400.142619ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.313082   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313090   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.712991   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713014   59938 pod_ready.go:81] duration metric: took 399.912003ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.713023   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713028   59938 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:00.114190   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114215   59938 pod_ready.go:81] duration metric: took 401.177651ms waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:00.114225   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114231   59938 pod_ready.go:38] duration metric: took 1.251475914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
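	The pod_ready loop above checks each system-critical pod's Ready condition and skips pods whose node is not yet Ready. A compact client-go sketch of the per-pod wait follows; the kubeconfig path and pod name are examples taken from the log, and this is not minikube's pod_ready implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod until it is Ready or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(p) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the run above uses the CI host's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "coredns-76f75df574-ptq95", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```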
	I0116 23:55:00.114247   59938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:00.127362   59938 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:00.127388   59938 kubeadm.go:640] restartCluster took 20.534611532s
	I0116 23:55:00.127403   59938 kubeadm.go:406] StartCluster complete in 20.579733794s
	I0116 23:55:00.127422   59938 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.127503   59938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:00.129224   59938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.129463   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:00.130188   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:55:00.129546   59938 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:00.130489   59938 addons.go:69] Setting storage-provisioner=true in profile "no-preload-085322"
	I0116 23:55:00.130520   59938 addons.go:234] Setting addon storage-provisioner=true in "no-preload-085322"
	W0116 23:55:00.130550   59938 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:00.130626   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.131148   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.131179   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.131603   59938 addons.go:69] Setting default-storageclass=true in profile "no-preload-085322"
	I0116 23:55:00.131662   59938 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-085322"
	I0116 23:55:00.132229   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.132282   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.132642   59938 addons.go:69] Setting metrics-server=true in profile "no-preload-085322"
	I0116 23:55:00.132682   59938 addons.go:234] Setting addon metrics-server=true in "no-preload-085322"
	W0116 23:55:00.132691   59938 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:00.132738   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.133280   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.133322   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.137759   59938 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-085322" context rescaled to 1 replicas
	I0116 23:55:00.137827   59938 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:00.139774   59938 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:00.141410   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:00.150892   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0116 23:55:00.151398   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.151952   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.151970   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.152274   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0116 23:55:00.152458   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0116 23:55:00.152489   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.152695   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.152865   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153081   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153356   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153401   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153541   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153583   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153867   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.153942   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.154667   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.154714   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.155326   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.155362   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.156980   59938 addons.go:234] Setting addon default-storageclass=true in "no-preload-085322"
	W0116 23:55:00.157007   59938 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:00.157043   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.157421   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.157529   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.174130   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0116 23:55:00.174627   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.175185   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.175204   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.175566   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.175814   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.175862   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0116 23:55:00.176349   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.176936   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.176948   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.177295   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.177469   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.177631   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.179319   59938 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:00.180744   59938 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.180762   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:00.180777   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.179023   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.182381   59938 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:00.183551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:00.183564   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:00.183585   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.183692   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184112   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.184133   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.184767   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.184932   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.185450   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.186460   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.186779   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.186812   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.187038   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.187221   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.187328   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.187452   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.189369   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0116 23:55:00.189703   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.190080   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.190091   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.190478   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.190890   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.190930   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.205734   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0116 23:55:00.206238   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.206799   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.206818   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.207212   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.207446   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.208811   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.209063   59938 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.209077   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:00.209094   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.211899   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212297   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.212323   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212575   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.212826   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.213095   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.213275   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.307298   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.335551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:00.335575   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:00.372999   59938 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:00.373001   59938 node_ready.go:35] waiting up to 6m0s for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:00.378131   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:00.378152   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:00.380282   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.401018   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:00.401069   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:00.426132   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093491344s)
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020515974s)
	I0116 23:55:01.400920   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400937   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400965   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400993   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400886   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401092   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401295   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401313   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401324   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401334   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401360   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401402   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401416   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401417   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401426   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401436   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401448   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401458   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401468   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401476   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401725   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401757   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401781   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401789   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401797   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401950   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401973   59938 addons.go:470] Verifying addon metrics-server=true in "no-preload-085322"
	I0116 23:55:01.403136   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.403161   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.403172   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.410263   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.410287   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.410536   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.410575   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.410578   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.412923   59938 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
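	The addon steps above scp the manifests onto the guest and apply them with a single kubectl invocation carrying several -f flags and an explicit KUBECONFIG. A simplified sketch of that shape follows; the paths and kubectl binary location are placeholders, and the command runs locally here rather than over SSH with sudo as in the log.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons applies several addon manifests in one kubectl invocation,
// mirroring the apply step in the log. Paths are placeholders.
func applyAddons(kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddons("/path/to/kubeconfig", []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
	if err != nil {
		fmt.Println(err)
	}
}
```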
	I0116 23:55:02.567723   59622 start.go:369] acquired machines lock for "old-k8s-version-771669" in 54.450397128s
	I0116 23:55:02.567772   59622 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:55:02.567779   59622 fix.go:54] fixHost starting: 
	I0116 23:55:02.568183   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:02.568215   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:02.587692   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0116 23:55:02.588096   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:02.588571   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:02.588590   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:02.588934   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:02.589163   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:02.589273   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:02.590929   59622 fix.go:102] recreateIfNeeded on old-k8s-version-771669: state=Stopped err=<nil>
	I0116 23:55:02.591002   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	W0116 23:55:02.591207   59622 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:55:02.593233   59622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-771669" ...
	I0116 23:55:01.414436   59938 addons.go:505] enable addons completed in 1.284891826s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0116 23:55:02.377542   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:01.175656   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Found IP for machine: 192.168.61.144
	I0116 23:55:01.176276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has current primary IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176287   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserving static IP address...
	I0116 23:55:01.176764   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserved static IP address: 192.168.61.144
	I0116 23:55:01.176803   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.176821   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for SSH to be available...
	I0116 23:55:01.176849   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | skip adding static IP to network mk-default-k8s-diff-port-967325 - found existing host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"}
	I0116 23:55:01.176862   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Getting to WaitForSSH function...
	I0116 23:55:01.179585   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180052   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.180086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH client type: external
	I0116 23:55:01.180225   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa (-rw-------)
	I0116 23:55:01.180258   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:01.180280   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | About to run SSH command:
	I0116 23:55:01.180298   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | exit 0
	I0116 23:55:01.287063   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:01.287361   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetConfigRaw
	I0116 23:55:01.288015   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.291188   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291601   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.291651   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291892   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:55:01.292147   60269 machine.go:88] provisioning docker machine ...
	I0116 23:55:01.292171   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:01.292392   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292603   60269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-967325"
	I0116 23:55:01.292631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.295688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.296107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296214   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.296399   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296557   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296732   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.296957   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.297484   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.297508   60269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-967325 && echo "default-k8s-diff-port-967325" | sudo tee /etc/hostname
	I0116 23:55:01.444451   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-967325
	
	I0116 23:55:01.444484   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.447658   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448083   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.448130   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448237   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.448482   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448670   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448836   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.449035   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.449518   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.449549   60269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-967325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-967325/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-967325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:01.592961   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:01.592998   60269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:01.593037   60269 buildroot.go:174] setting up certificates
	I0116 23:55:01.593052   60269 provision.go:83] configureAuth start
	I0116 23:55:01.593066   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.593369   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.596637   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597053   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.597093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.599945   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600294   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.600332   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600435   60269 provision.go:138] copyHostCerts
	I0116 23:55:01.600492   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:01.600500   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:01.600560   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:01.600653   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:01.600657   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:01.600675   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:01.600733   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:01.600736   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:01.600751   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:01.600807   60269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-967325 san=[192.168.61.144 192.168.61.144 localhost 127.0.0.1 minikube default-k8s-diff-port-967325]
	I0116 23:55:01.777575   60269 provision.go:172] copyRemoteCerts
	I0116 23:55:01.777655   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:01.777685   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.780729   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.781117   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781323   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.781493   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.781672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.781817   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:01.875542   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:01.898144   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 23:55:01.923770   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:01.947374   60269 provision.go:86] duration metric: configureAuth took 354.306627ms
	I0116 23:55:01.947400   60269 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:01.947656   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:55:01.947752   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.950688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951006   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.951031   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951309   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.951475   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951846   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.952024   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.952549   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.952575   60269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:02.296465   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:02.296504   60269 machine.go:91] provisioned docker machine in 1.004340116s
	I0116 23:55:02.296517   60269 start.go:300] post-start starting for "default-k8s-diff-port-967325" (driver="kvm2")
	I0116 23:55:02.296533   60269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:02.296559   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.296898   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:02.296931   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.299843   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.300330   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300424   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.300613   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.300813   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.300988   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.392380   60269 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:02.396719   60269 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:02.396746   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:02.396840   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:02.396931   60269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:02.397013   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:02.405217   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:02.428260   60269 start.go:303] post-start completed in 131.726459ms
	I0116 23:55:02.428289   60269 fix.go:56] fixHost completed within 20.901025477s
	I0116 23:55:02.428351   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.431541   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.431904   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.431935   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.432124   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.432327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432679   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.432865   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:02.433181   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:02.433200   60269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:02.567559   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449302.518065106
	
	I0116 23:55:02.567583   60269 fix.go:206] guest clock: 1705449302.518065106
	I0116 23:55:02.567592   60269 fix.go:219] Guest: 2024-01-16 23:55:02.518065106 +0000 UTC Remote: 2024-01-16 23:55:02.428292966 +0000 UTC m=+263.717566224 (delta=89.77214ms)
	I0116 23:55:02.567628   60269 fix.go:190] guest clock delta is within tolerance: 89.77214ms
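The fix.go lines above compare the guest VM's clock against the host clock and accept the roughly 90ms skew without resynchronizing. A minimal Go sketch of that kind of tolerance check, assuming a hypothetical 2-second threshold (illustrative only, not minikube's actual code or setting):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the skew between the guest clock and the
// host clock is small enough to skip a resync. The 2s threshold used below
// is a hypothetical value for illustration.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(89 * time.Millisecond) // e.g. the ~90ms delta reported above
	fmt.Println(withinTolerance(guest, host, 2*time.Second))
}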
	I0116 23:55:02.567634   60269 start.go:83] releasing machines lock for "default-k8s-diff-port-967325", held for 21.040406039s
	I0116 23:55:02.567676   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.567951   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:02.571196   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.571612   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.571641   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.572815   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573415   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573626   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573709   60269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:02.573777   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.573935   60269 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:02.573963   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.577057   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577347   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577687   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577741   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577786   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577804   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577976   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578023   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578172   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578358   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578359   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578488   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.578514   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.707601   60269 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:02.715420   60269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:02.871362   60269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:02.878362   60269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:02.878438   60269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:02.898508   60269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:02.898534   60269 start.go:475] detecting cgroup driver to use...
	I0116 23:55:02.898627   60269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:02.915544   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:02.929881   60269 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:02.929948   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:02.946126   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:02.963314   60269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:03.087669   60269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:03.231908   60269 docker.go:233] disabling docker service ...
	I0116 23:55:03.232001   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:03.247745   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:03.263573   60269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:03.394931   60269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:03.533725   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:03.550475   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:03.571922   60269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:55:03.571984   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.584086   60269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:03.584195   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.595191   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.604671   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.614076   60269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:03.623637   60269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:03.632143   60269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:03.632225   60269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:03.645964   60269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:03.657719   60269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:59.164409   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.363424   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.434315   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.505227   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:59.505321   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.006175   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.505693   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.005697   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.505467   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.005808   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.033017   60073 api_server.go:72] duration metric: took 2.527792184s to wait for apiserver process to appear ...
	I0116 23:55:02.033039   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:02.033056   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:03.785123   60269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:03.976744   60269 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:03.976819   60269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:03.981545   60269 start.go:543] Will wait 60s for crictl version
	I0116 23:55:03.981598   60269 ssh_runner.go:195] Run: which crictl
	I0116 23:55:03.985233   60269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:04.033443   60269 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:04.033541   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.087776   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.142302   60269 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:55:02.594568   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Start
	I0116 23:55:02.594750   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring networks are active...
	I0116 23:55:02.595457   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network default is active
	I0116 23:55:02.595812   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network mk-old-k8s-version-771669 is active
	I0116 23:55:02.596285   59622 main.go:141] libmachine: (old-k8s-version-771669) Getting domain xml...
	I0116 23:55:02.597150   59622 main.go:141] libmachine: (old-k8s-version-771669) Creating domain...
	I0116 23:55:03.999986   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting to get IP...
	I0116 23:55:04.001060   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.001581   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.001663   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.001550   61289 retry.go:31] will retry after 298.561748ms: waiting for machine to come up
	I0116 23:55:04.302120   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.302820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.302847   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.302767   61289 retry.go:31] will retry after 342.293835ms: waiting for machine to come up
	I0116 23:55:04.646424   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.647107   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.647133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.647055   61289 retry.go:31] will retry after 395.611503ms: waiting for machine to come up
	I0116 23:55:05.046785   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.047276   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.047304   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.047189   61289 retry.go:31] will retry after 552.22886ms: waiting for machine to come up
	I0116 23:55:07.029353   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.029384   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.029401   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.187789   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.187830   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.187877   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.197889   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.197924   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.533214   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.540976   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:07.541008   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.033550   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.044749   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:08.044779   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.533231   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.540197   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0116 23:55:08.551065   60073 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:08.551108   60073 api_server.go:131] duration metric: took 6.518060223s to wait for apiserver health ...
	I0116 23:55:08.551119   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:55:08.551128   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:08.553370   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
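The healthz probes logged above show the usual startup progression of a restarted apiserver: 403 while anonymous access is still forbidden, 500 while poststarthooks such as rbac/bootstrap-roles are pending, then 200. A minimal Go sketch of polling such an endpoint until it reports healthy; TLS verification is skipped only because the endpoint uses a self-signed CA (illustrative only, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline expires, printing non-200 bodies along the way.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the run above.
	if err := pollHealthz("https://192.168.39.226:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}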
	I0116 23:55:04.377661   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:06.377732   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:07.377978   59938 node_ready.go:49] node "no-preload-085322" has status "Ready":"True"
	I0116 23:55:07.378007   59938 node_ready.go:38] duration metric: took 7.004955625s waiting for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:07.378019   59938 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:07.394319   59938 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401604   59938 pod_ready.go:92] pod "coredns-76f75df574-ptq95" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.401634   59938 pod_ready.go:81] duration metric: took 7.260618ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401647   59938 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412094   59938 pod_ready.go:92] pod "etcd-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.412123   59938 pod_ready.go:81] duration metric: took 10.46753ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412137   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922096   59938 pod_ready.go:92] pod "kube-apiserver-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.922169   59938 pod_ready.go:81] duration metric: took 510.023791ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922208   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929615   59938 pod_ready.go:92] pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.929645   59938 pod_ready.go:81] duration metric: took 7.422332ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929659   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178529   59938 pod_ready.go:92] pod "kube-proxy-64z5c" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.178558   59938 pod_ready.go:81] duration metric: took 248.89013ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178572   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
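The pod_ready waits above poll each system-critical pod until its Ready condition is True. A minimal client-go sketch of that same check, assuming a hypothetical kubeconfig path and reusing one of the pod names from this run (illustrative only, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// what a "Ready" wait like the one above is checking for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; adjust for your own cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-no-preload-085322", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}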
	I0116 23:55:04.144239   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:04.147395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.147816   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:04.147864   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.148032   60269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:04.152106   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:04.166312   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:55:04.166412   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:04.207955   60269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:55:04.208024   60269 ssh_runner.go:195] Run: which lz4
	I0116 23:55:04.211817   60269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:04.215791   60269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:04.215816   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:55:06.109275   60269 crio.go:444] Took 1.897478 seconds to copy over tarball
	I0116 23:55:06.109361   60269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:08.555066   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:08.584102   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:08.660533   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:08.680559   60073 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:08.680588   60073 system_pods.go:61] "coredns-5dd5756b68-49p2f" [5241a39a-599e-4ae2-b8c8-7494382819d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:08.680595   60073 system_pods.go:61] "etcd-embed-certs-837871" [99fce5e6-124e-4e96-b722-41c0be595863] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:08.680603   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [7bf73dd6-7f27-482a-896a-a5097bd047a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:08.680609   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [be8f34fb-2d00-4c86-aab3-c4d74d92d42c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:08.680615   60073 system_pods.go:61] "kube-proxy-nglts" [3ec00f1a-258b-4da3-9b41-dbd96156de04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:08.680624   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [f9af2c43-cb66-4ebb-b23c-4f898be33d64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:08.680669   60073 system_pods.go:61] "metrics-server-57f55c9bc5-npd7s" [5aa75079-2c85-4fde-ba88-9ae5bb73ecc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:08.680678   60073 system_pods.go:61] "storage-provisioner" [5bae4d8b-030b-4476-8aa6-f4a66a8f80a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:55:08.680685   60073 system_pods.go:74] duration metric: took 20.127241ms to wait for pod list to return data ...
	I0116 23:55:08.680695   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:08.685562   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:08.685594   60073 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:08.685604   60073 node_conditions.go:105] duration metric: took 4.905393ms to run NodePressure ...
	I0116 23:55:08.685622   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:05.600887   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.601408   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.601444   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.601312   61289 retry.go:31] will retry after 584.67072ms: waiting for machine to come up
	I0116 23:55:06.188018   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:06.188524   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:06.188550   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:06.188434   61289 retry.go:31] will retry after 859.064841ms: waiting for machine to come up
	I0116 23:55:07.048810   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:07.049461   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:07.049491   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:07.049417   61289 retry.go:31] will retry after 1.064800753s: waiting for machine to come up
	I0116 23:55:08.115741   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:08.116406   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:08.116430   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:08.116372   61289 retry.go:31] will retry after 1.289118736s: waiting for machine to come up
	I0116 23:55:09.407820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:09.408291   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:09.408319   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:09.408262   61289 retry.go:31] will retry after 1.623353195s: waiting for machine to come up
	I0116 23:55:08.979310   59938 pod_ready.go:92] pod "kube-scheduler-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.979407   59938 pod_ready.go:81] duration metric: took 800.824219ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.979438   59938 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.546193   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:09.452388   60269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342992298s)
	I0116 23:55:09.452415   60269 crio.go:451] Took 3.343109 seconds to extract the tarball
	I0116 23:55:09.452423   60269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:09.497202   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:09.552426   60269 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:55:09.552460   60269 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:55:09.552532   60269 ssh_runner.go:195] Run: crio config
	I0116 23:55:09.623685   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:09.623716   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:09.623743   60269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:09.623767   60269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-967325 NodeName:default-k8s-diff-port-967325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:55:09.623938   60269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-967325"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:09.624024   60269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-967325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
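	(Editorial note: the drop-in above is what minikube writes to the node before restarting kubelet. A minimal way to inspect the rendered files manually, using the paths logged below, would be something along these lines; the profile name is taken from this run and the commands are a sketch, not part of the test itself.)
	  # hypothetical manual check, not executed by the test
	  minikube ssh -p default-k8s-diff-port-967325 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  minikube ssh -p default-k8s-diff-port-967325 -- sudo cat /var/tmp/minikube/kubeadm.yaml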
	I0116 23:55:09.624079   60269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:55:09.632768   60269 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:09.632838   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:09.642978   60269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 23:55:09.660304   60269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:09.677864   60269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 23:55:09.699234   60269 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:09.703170   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:09.718511   60269 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325 for IP: 192.168.61.144
	I0116 23:55:09.718551   60269 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:09.718727   60269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:09.718798   60269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:09.718895   60269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/client.key
	I0116 23:55:09.718975   60269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key.a430fbc2
	I0116 23:55:09.719039   60269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key
	I0116 23:55:09.719175   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:09.719225   60269 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:09.719240   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:09.719283   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:09.719318   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:09.719358   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:09.719416   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:09.720339   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:09.748578   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:55:09.778396   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:09.803745   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:55:09.828009   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:09.850951   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:09.874273   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:09.897385   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:09.923319   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:09.946301   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:09.970778   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:09.994497   60269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:10.013259   60269 ssh_runner.go:195] Run: openssl version
	I0116 23:55:10.020357   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:10.032324   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037071   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037122   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.043220   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:10.052796   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:10.063065   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.067904   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.068000   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.074570   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:10.087080   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:10.099734   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105299   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105360   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.112084   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
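	(Editorial note: the numeric symlink names above, e.g. b5213941.0, follow OpenSSL's subject-hash convention for CA directories. The same pattern spelled out explicitly, with paths taken from the log, looks roughly like this sketch:)
	  # compute the subject hash and install the hash-named symlink
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"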
	I0116 23:55:10.123175   60269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:10.127669   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:10.133522   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:10.139085   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:10.145018   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:10.150920   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:10.156719   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:55:10.162808   60269 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:10.162893   60269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:10.162936   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:10.208917   60269 cri.go:89] found id: ""
	I0116 23:55:10.209008   60269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:10.221689   60269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:10.221710   60269 kubeadm.go:636] restartCluster start
	I0116 23:55:10.221776   60269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:10.233762   60269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.234916   60269 kubeconfig.go:92] found "default-k8s-diff-port-967325" server: "https://192.168.61.144:8444"
	I0116 23:55:10.237484   60269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:10.246418   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.246495   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.257759   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.747378   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.747466   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.761884   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.247445   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.247543   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.258490   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.747483   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.747623   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.764389   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.246997   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.247122   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.262538   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.747219   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.747387   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.762535   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.246636   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.246705   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.258883   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.747504   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.747588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.759640   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:09.229704   60073 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224745   60073 kubeadm.go:787] kubelet initialised
	I0116 23:55:10.224771   60073 kubeadm.go:788] duration metric: took 994.984702ms waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224781   60073 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:11.348058   60073 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.356516   60073 pod_ready.go:102] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:13.856540   60073 pod_ready.go:92] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:13.856573   60073 pod_ready.go:81] duration metric: took 2.508479475s waiting for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.856586   60073 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.033009   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:11.033544   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:11.033588   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:11.033487   61289 retry.go:31] will retry after 1.553841353s: waiting for machine to come up
	I0116 23:55:12.588794   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:12.589269   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:12.589297   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:12.589245   61289 retry.go:31] will retry after 1.907517113s: waiting for machine to come up
	I0116 23:55:14.499305   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:14.499734   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:14.499759   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:14.499683   61289 retry.go:31] will retry after 3.406811143s: waiting for machine to come up
	I0116 23:55:13.986208   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:15.987948   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:18.490012   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:14.247197   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.247299   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.262013   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:14.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.746558   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.761452   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.246988   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.247075   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.261345   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.747524   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.747618   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.760291   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.246551   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.246648   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.260545   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.746471   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.746585   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.758637   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.247227   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.247331   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.258514   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.747046   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.747138   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.758877   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.247489   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.247561   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.259581   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.747241   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.747335   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.759146   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.867702   60073 pod_ready.go:102] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:17.864681   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.864706   60073 pod_ready.go:81] duration metric: took 4.008111977s waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.864718   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873106   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.873127   60073 pod_ready.go:81] duration metric: took 8.400576ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873136   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878501   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.878519   60073 pod_ready.go:81] duration metric: took 5.375395ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878535   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883653   60073 pod_ready.go:92] pod "kube-proxy-nglts" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.883669   60073 pod_ready.go:81] duration metric: took 5.128525ms waiting for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883680   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.888978   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.888996   60073 pod_ready.go:81] duration metric: took 5.309484ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.889011   60073 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.908092   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:17.908486   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:17.908520   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:17.908432   61289 retry.go:31] will retry after 3.983135021s: waiting for machine to come up
	I0116 23:55:20.987833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:22.989682   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:19.246437   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.246547   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.257900   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:19.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.746572   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.758509   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.247334   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:20.247418   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:20.258909   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.258939   60269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:20.258948   60269 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:20.258958   60269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:20.259023   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:20.300659   60269 cri.go:89] found id: ""
	I0116 23:55:20.300740   60269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:20.315326   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:20.323563   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:20.323629   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331846   60269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331871   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:20.443085   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.556705   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113585461s)
	I0116 23:55:21.556730   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.745024   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.824910   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
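	(Editorial note: the restart path above regenerates certificates, kubeconfigs, the kubelet bootstrap, the control-plane static pods, and local etcd through individual "kubeadm init phase" subcommands rather than a full "kubeadm init". A hedged follow-up check, not part of this run, would be to confirm the static pod manifests landed under the staticPodPath from the config earlier in the log:)
	  # hypothetical check: control-plane phases should leave these manifests behind
	  sudo ls /etc/kubernetes/manifests
	  # typically: etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml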
	I0116 23:55:21.916770   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:21.916856   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.416983   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.917411   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:23.417012   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:19.896636   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.898504   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.896143   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896665   59622 main.go:141] libmachine: (old-k8s-version-771669) Found IP for machine: 192.168.72.114
	I0116 23:55:21.896717   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has current primary IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896729   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserving static IP address...
	I0116 23:55:21.897128   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.897157   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | skip adding static IP to network mk-old-k8s-version-771669 - found existing host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"}
	I0116 23:55:21.897174   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Getting to WaitForSSH function...
	I0116 23:55:21.897194   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserved static IP address: 192.168.72.114
	I0116 23:55:21.897207   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting for SSH to be available...
	I0116 23:55:21.900064   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900492   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.900531   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900775   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH client type: external
	I0116 23:55:21.900805   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa (-rw-------)
	I0116 23:55:21.900835   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:21.900852   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | About to run SSH command:
	I0116 23:55:21.900867   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | exit 0
	I0116 23:55:22.002573   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:22.003051   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetConfigRaw
	I0116 23:55:22.003790   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.007208   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.007726   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007947   59622 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:55:22.008199   59622 machine.go:88] provisioning docker machine ...
	I0116 23:55:22.008225   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.008439   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008649   59622 buildroot.go:166] provisioning hostname "old-k8s-version-771669"
	I0116 23:55:22.008672   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008859   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.011893   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012288   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.012321   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012475   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.012655   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.012825   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.013009   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.013176   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.013645   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.013669   59622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-771669 && echo "old-k8s-version-771669" | sudo tee /etc/hostname
	I0116 23:55:22.159863   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-771669
	
	I0116 23:55:22.159897   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.162806   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163257   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.163296   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163483   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.163700   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.163882   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.164023   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.164179   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.164551   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.164569   59622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-771669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-771669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-771669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:22.309881   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:22.309914   59622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:22.309935   59622 buildroot.go:174] setting up certificates
	I0116 23:55:22.309945   59622 provision.go:83] configureAuth start
	I0116 23:55:22.309957   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.310198   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.312567   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.312901   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.312930   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.313107   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.315382   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.315767   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.315807   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.316000   59622 provision.go:138] copyHostCerts
	I0116 23:55:22.316043   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:22.316053   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:22.316116   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:22.316202   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:22.316210   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:22.316228   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:22.316289   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:22.316296   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:22.316312   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:22.316365   59622 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-771669 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube old-k8s-version-771669]
	I0116 23:55:22.437253   59622 provision.go:172] copyRemoteCerts
	I0116 23:55:22.437325   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:22.437348   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.440075   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440363   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.440390   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440626   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.440808   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.440960   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.441145   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:22.536222   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:22.562061   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 23:55:22.586856   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:22.610936   59622 provision.go:86] duration metric: configureAuth took 300.975023ms
	I0116 23:55:22.610965   59622 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:22.611217   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:55:22.611306   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.614770   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615218   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.615253   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615508   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.615738   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.615931   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.616078   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.616259   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.616622   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.616641   59622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:22.958075   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:22.958102   59622 machine.go:91] provisioned docker machine in 949.885683ms
	I0116 23:55:22.958121   59622 start.go:300] post-start starting for "old-k8s-version-771669" (driver="kvm2")
	I0116 23:55:22.958136   59622 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:22.958160   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.958492   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:22.958528   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.961489   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.961850   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.961879   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.962042   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.962232   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.962423   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.962585   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.058948   59622 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:23.063281   59622 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:23.063309   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:23.063383   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:23.063477   59622 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:23.063589   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:23.075280   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:23.099934   59622 start.go:303] post-start completed in 141.796411ms
	I0116 23:55:23.099963   59622 fix.go:56] fixHost completed within 20.532183026s
	I0116 23:55:23.099986   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.102938   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103320   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.103355   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103471   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.103682   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103837   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103981   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.104148   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:23.104525   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:23.104539   59622 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 23:55:23.239875   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449323.216935077
	
	I0116 23:55:23.239947   59622 fix.go:206] guest clock: 1705449323.216935077
	I0116 23:55:23.239963   59622 fix.go:219] Guest: 2024-01-16 23:55:23.216935077 +0000 UTC Remote: 2024-01-16 23:55:23.099966517 +0000 UTC m=+357.574360679 (delta=116.96856ms)
	I0116 23:55:23.239987   59622 fix.go:190] guest clock delta is within tolerance: 116.96856ms
	I0116 23:55:23.239994   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 20.672247822s
	I0116 23:55:23.240021   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.240303   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:23.243487   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.243962   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.243999   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.244245   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244731   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244917   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.245023   59622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:23.245091   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.245237   59622 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:23.245261   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.248169   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248391   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248664   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.248691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248835   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.248936   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.249012   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.249043   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249196   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249284   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.249351   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.249454   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249607   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249737   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.380837   59622 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:23.387163   59622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:23.543350   59622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:23.550519   59622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:23.550587   59622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:23.565019   59622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:23.565046   59622 start.go:475] detecting cgroup driver to use...
	I0116 23:55:23.565125   59622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:23.579314   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:23.591247   59622 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:23.591310   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:23.605294   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:23.618799   59622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:23.742752   59622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:23.876604   59622 docker.go:233] disabling docker service ...
	I0116 23:55:23.876678   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:23.891240   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:23.906010   59622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:24.059751   59622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:24.186517   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:24.201344   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:24.218947   59622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 23:55:24.219014   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.230843   59622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:24.230917   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.243120   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.252562   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.264610   59622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:24.275702   59622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:24.284982   59622 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:24.285046   59622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:24.298681   59622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:24.307743   59622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:55:24.425125   59622 ssh_runner.go:195] Run: sudo systemctl restart crio
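The runtime preparation above amounts to pointing crictl at the CRI-O socket, forcing the cgroupfs cgroup manager and the expected pause image, enabling bridge netfilter/IP forwarding, and restarting CRI-O. A minimal sketch of the same steps run by hand on the guest (paths and values copied from the log above; option names may differ on other CRI-O versions):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pause image and cgroupfs cgroup manager, as logged above
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # netfilter/forwarding prerequisites, then restart the runtime
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio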
	I0116 23:55:24.597300   59622 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:24.597373   59622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:24.603241   59622 start.go:543] Will wait 60s for crictl version
	I0116 23:55:24.603314   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:24.607580   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:24.648923   59622 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:24.649022   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.696485   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.754660   59622 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 23:55:24.756045   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:24.759033   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759392   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:24.759432   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759771   59622 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:24.764448   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:24.777724   59622 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 23:55:24.777812   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:24.825020   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:24.825088   59622 ssh_runner.go:195] Run: which lz4
	I0116 23:55:24.829208   59622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:24.833495   59622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:24.833523   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 23:55:24.992848   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:27.488098   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:23.916961   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.417588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.441144   60269 api_server.go:72] duration metric: took 2.5243712s to wait for apiserver process to appear ...
	I0116 23:55:24.441176   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:24.441198   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:24.441742   60269 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0116 23:55:24.941292   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.835831   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.835867   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.835882   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.868017   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.868058   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.942282   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.960876   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:27.960928   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:28.442258   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.449969   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.450001   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:24.397456   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:26.397862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.404313   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.941892   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.959617   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.959651   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:29.441742   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:29.446933   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0116 23:55:29.455520   60269 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:29.455548   60269 api_server.go:131] duration metric: took 5.014364838s to wait for apiserver health ...
	I0116 23:55:29.455561   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:29.455569   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:29.457775   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
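The 403 and 500 responses above are the normal progression for a freshly restarted apiserver: anonymous requests to /healthz are rejected until the rbac/bootstrap-roles post-start hook (the one still marked [-] in the 500 listings) has installed the default RBAC objects, after which the remaining hooks flip to [+] and the endpoint returns 200 "ok". Roughly the same probe can be run by hand against the endpoint logged above; a sketch only, with -k because the health probe is unauthenticated:

    # poll until the apiserver reports "ok"
    until curl -ks https://192.168.61.144:8444/healthz | grep -qx ok; do sleep 1; done
    # per-check breakdown matching the [+]/[-] lists in the log
    curl -ks 'https://192.168.61.144:8444/healthz?verbose'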
	I0116 23:55:26.372140   59622 crio.go:444] Took 1.542968 seconds to copy over tarball
	I0116 23:55:26.372233   59622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:29.316720   59622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944443375s)
	I0116 23:55:29.316749   59622 crio.go:451] Took 2.944578 seconds to extract the tarball
	I0116 23:55:29.316760   59622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:29.359053   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:29.407438   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:29.407466   59622 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:55:29.407526   59622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.407582   59622 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.407605   59622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.407624   59622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.407656   59622 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 23:55:29.407657   59622 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.407840   59622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.407530   59622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.409393   59622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 23:55:29.409457   59622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.409480   59622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.409647   59622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.409675   59622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.409682   59622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.622629   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.626907   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.630596   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 23:55:29.633693   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.635868   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.644919   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.649358   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.724339   59622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 23:55:29.724400   59622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.724467   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.795647   59622 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 23:55:29.795694   59622 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.795747   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.844312   59622 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 23:55:29.844373   59622 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 23:55:29.844427   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849856   59622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 23:55:29.849876   59622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.849911   59622 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 23:55:29.849928   59622 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.849956   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850005   59622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 23:55:29.850030   59622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.850047   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.850062   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850101   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.852839   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 23:55:29.872722   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.872753   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.872821   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.872997   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.963139   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 23:55:29.967047   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 23:55:29.981726   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 23:55:30.047814   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 23:55:30.047906   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 23:55:30.047972   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 23:55:30.048002   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 23:55:30.281680   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:30.423881   59622 cache_images.go:92] LoadImages completed in 1.016396141s
	W0116 23:55:30.423996   59622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
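The warning above follows from the two checks logged just before it: after the preload tarball was extracted, crictl still did not report the expected v1.16.0 images, and the fallback per-image cache under .minikube/cache/images was missing (the stat on coredns_1.6.2 failed), so there was nothing to load. Both checks can be repeated by hand, using the paths from the log:

    # on the guest: are the expected images in CRI-O's store?
    sudo crictl images --output json | grep kube-apiserver
    # on the host: does the fallback image cache exist?
    ls /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/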
	I0116 23:55:30.424113   59622 ssh_runner.go:195] Run: crio config
	I0116 23:55:30.486915   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:30.486935   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:30.486951   59622 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:30.486975   59622 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-771669 NodeName:old-k8s-version-771669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 23:55:30.487151   59622 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-771669"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-771669
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.114:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:30.487252   59622 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-771669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
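The rendered InitConfiguration/ClusterConfiguration/KubeletConfiguration above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below, and the kubelet unit drop-in lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Assuming the profile name from this run, both can be inspected on the guest afterwards with:

    minikube -p old-k8s-version-771669 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p old-k8s-version-771669 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf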
	I0116 23:55:30.487320   59622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 23:55:30.497629   59622 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:30.497706   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:30.505710   59622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 23:55:30.523292   59622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:30.539544   59622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 23:55:30.557436   59622 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:30.561329   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:29.488446   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:32.775251   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:29.459468   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:29.471218   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:29.488687   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:29.499433   60269 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:29.499458   60269 system_pods.go:61] "coredns-5dd5756b68-7kwrd" [38a96fe5-70a8-46e6-b899-b39558e08855] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:29.499465   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [bc2e7805-71f2-4924-80d7-2dd853ebeea9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:29.499472   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [8c01f8da-0156-4d16-b5e7-262427171137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:29.499484   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [04b93c96-ebc0-4257-b480-7be1ea9f7fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:29.499496   60269 system_pods.go:61] "kube-proxy-jmq58" [ec5c282f-04c8-4839-a16f-0a2024e0d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:29.499521   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [11e73d49-a3ba-44b3-9630-fd07fb23777f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:29.499533   60269 system_pods.go:61] "metrics-server-57f55c9bc5-bkbpm" [6ddb8af1-da20-4400-b6ba-6f0cf342b115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:29.499538   60269 system_pods.go:61] "storage-provisioner" [5b22598c-c5e0-4a9e-96f3-1732ecd018a1] Running
	I0116 23:55:29.499544   60269 system_pods.go:74] duration metric: took 10.840963ms to wait for pod list to return data ...
	I0116 23:55:29.499550   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:29.502918   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:29.502954   60269 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:29.502965   60269 node_conditions.go:105] duration metric: took 3.409475ms to run NodePressure ...
	I0116 23:55:29.502985   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:29.743687   60269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749616   60269 kubeadm.go:787] kubelet initialised
	I0116 23:55:29.749676   60269 kubeadm.go:788] duration metric: took 5.958924ms waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749687   60269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:29.756788   60269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.762593   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762669   60269 pod_ready.go:81] duration metric: took 5.856721ms waiting for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.762686   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762695   60269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.768772   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768801   60269 pod_ready.go:81] duration metric: took 6.092773ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.768816   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768824   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.775409   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775442   60269 pod_ready.go:81] duration metric: took 6.605139ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.775455   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775463   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.902106   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902206   60269 pod_ready.go:81] duration metric: took 126.731712ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.902236   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902269   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829869   60269 pod_ready.go:92] pod "kube-proxy-jmq58" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:30.829891   60269 pod_ready.go:81] duration metric: took 927.598475ms waiting for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829900   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:32.831782   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
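The pod_ready checks above poll each system-critical pod's Ready condition and deliberately skip (with an error) while the node itself still reports Ready:"False". Assuming the kubectl context minikube created for this profile keeps the profile name, roughly the same check can be made once by hand:

    kubectl --context default-k8s-diff-port-967325 -n kube-system get pods
    kubectl --context default-k8s-diff-port-967325 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m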
	I0116 23:55:30.899557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:33.397105   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.574029   59622 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669 for IP: 192.168.72.114
	I0116 23:55:30.890778   59622 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:30.890952   59622 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:30.891020   59622 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:30.891123   59622 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/client.key
	I0116 23:55:31.309085   59622 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key.9adeb8c5
	I0116 23:55:31.309205   59622 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key
	I0116 23:55:31.309360   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:31.309405   59622 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:31.309417   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:31.309461   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:31.309514   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:31.309547   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:31.309606   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:31.310493   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:31.335886   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:55:31.358617   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:31.382183   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:55:31.407509   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:31.429683   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:31.453368   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:31.476083   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:31.499326   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:31.522939   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:31.548912   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:31.571716   59622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:31.587851   59622 ssh_runner.go:195] Run: openssl version
	I0116 23:55:31.593185   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:31.602521   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.606986   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.607049   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.612447   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:31.622043   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:31.631959   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636586   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636653   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.642415   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:31.651566   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:31.660990   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665574   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665624   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.671129   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:31.680951   59622 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:31.685144   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:31.690488   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:31.696140   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:31.702013   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:31.707887   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:31.713601   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
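	The `openssl x509 ... -checkend 86400` runs above verify that each reused control-plane certificate is still valid for at least 24 hours. A minimal Go sketch of the same check (hypothetical path and helper name; not minikube's actual implementation):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window (the inverse of openssl's -checkend success case).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// One of the certs the log checks; any of the guest cert paths works.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```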
	I0116 23:55:31.719957   59622 kubeadm.go:404] StartCluster: {Name:old-k8s-version-771669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:31.720050   59622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:31.720106   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:31.764090   59622 cri.go:89] found id: ""
	I0116 23:55:31.764179   59622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:31.772783   59622 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:31.772800   59622 kubeadm.go:636] restartCluster start
	I0116 23:55:31.772900   59622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:31.782951   59622 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:31.784108   59622 kubeconfig.go:92] found "old-k8s-version-771669" server: "https://192.168.72.114:8443"
	I0116 23:55:31.786822   59622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:31.795516   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:31.795564   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:31.806541   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.296087   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.296205   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.308136   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.796155   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.796250   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.812275   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.295834   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.295918   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.309867   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.796504   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.796592   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.808880   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.296500   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.296567   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.308101   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.795674   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.795765   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.808334   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:35.295900   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.295998   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.308522   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.987445   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:37.488388   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:34.836821   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:36.837242   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.896319   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.396168   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.796048   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.796157   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.809841   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.296449   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.296573   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.309339   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.795874   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.795953   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.810740   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.296322   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.296421   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.308384   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.796469   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.796576   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.810173   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.295663   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.295750   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.307391   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.795952   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.796050   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.809147   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.295669   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.295754   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.308210   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.796104   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.796226   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.808134   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:40.295713   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.295815   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.307552   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.986946   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.487118   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.838230   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:39.837451   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:39.837475   60269 pod_ready.go:81] duration metric: took 9.007568234s waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:39.837495   60269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:41.844595   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.397089   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.896014   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.795619   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.795698   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.809529   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.296081   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.296153   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.309642   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.796355   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.796439   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.808383   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.808409   59622 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:41.808417   59622 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:41.808426   59622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:41.808480   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:41.851612   59622 cri.go:89] found id: ""
	I0116 23:55:41.851668   59622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:41.867103   59622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:41.876244   59622 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:41.876306   59622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886007   59622 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886029   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.004968   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.972680   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.175241   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.242840   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
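	restartCluster reconfigures the existing node by re-running individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init. A rough sketch of driving that same phase sequence, assuming passwordless sudo and the binaries path shown in the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phases taken from the log above; the PATH prefix points kubeadm at the
	// version-pinned binaries minikube installs on the guest.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
			os.Exit(1)
		}
	}
}
```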
	I0116 23:55:43.330848   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:43.330935   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:43.831021   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.331539   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.831545   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.331601   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.354248   59622 api_server.go:72] duration metric: took 2.023403352s to wait for apiserver process to appear ...
	I0116 23:55:45.354271   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:45.354287   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:45.354802   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": dial tcp 192.168.72.114:8443: connect: connection refused
	I0116 23:55:44.988114   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.486765   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:43.846368   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.848129   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:48.344150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:44.897147   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.396873   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.855032   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:50.855392   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 23:55:50.855430   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.372327   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.372361   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.372383   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.429072   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.429102   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.854848   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.861367   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:51.861393   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.354990   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.360925   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:52.360951   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.854778   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.861036   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:55:52.868982   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:55:52.869013   59622 api_server.go:131] duration metric: took 7.514729701s to wait for apiserver health ...
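	The health wait above simply polls `https://<node-ip>:8443/healthz` until it returns 200, treating the intermediate 403 (anonymous user before RBAC bootstrap completes) and 500 (post-start hooks still running) responses as "not ready yet". A minimal sketch of that loop, assuming an unauthenticated probe that skips TLS verification:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200
// or the deadline passes; 403/500 responses count as "still starting".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // unauthenticated probe
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.114:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```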
	I0116 23:55:52.869024   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:52.869033   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:52.870842   59622 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:49.486891   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.489411   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:50.345462   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.345784   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:49.397270   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.397489   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:53.398253   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.872155   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:52.883251   59622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
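	The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is a standard bridge CNI configuration. Its exact contents are not shown in the log; the sketch below writes an illustrative bridge conflist of the same shape (the JSON values are assumptions, not the file minikube generates):

```go
package main

import "os"

// Illustrative bridge CNI config; field values are assumptions, not the
// exact conflist minikube writes to the guest.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```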
	I0116 23:55:52.904708   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:52.916515   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:55:52.916550   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:55:52.916558   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:55:52.916564   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:55:52.916571   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Pending
	I0116 23:55:52.916577   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:55:52.916584   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:55:52.916597   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:55:52.916606   59622 system_pods.go:74] duration metric: took 11.876364ms to wait for pod list to return data ...
	I0116 23:55:52.916618   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:52.920125   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:52.920158   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:52.920178   59622 node_conditions.go:105] duration metric: took 3.551281ms to run NodePressure ...
	I0116 23:55:52.920199   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:53.157112   59622 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161560   59622 kubeadm.go:787] kubelet initialised
	I0116 23:55:53.161590   59622 kubeadm.go:788] duration metric: took 4.45031ms waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161601   59622 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:53.167210   59622 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.172679   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172705   59622 pod_ready.go:81] duration metric: took 5.453621ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.172713   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172722   59622 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.178090   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178121   59622 pod_ready.go:81] duration metric: took 5.38864ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.178132   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178141   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.183932   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183963   59622 pod_ready.go:81] duration metric: took 5.809315ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.183973   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183979   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.309476   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309502   59622 pod_ready.go:81] duration metric: took 125.513469ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.309518   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309526   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.710400   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710426   59622 pod_ready.go:81] duration metric: took 400.892114ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.710435   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710441   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:54.108608   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108638   59622 pod_ready.go:81] duration metric: took 398.187187ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:54.108652   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108661   59622 pod_ready.go:38] duration metric: took 947.048567ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
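	Each pod_ready wait above checks the pod's Ready condition and skips ahead when the hosting node itself is not Ready yet. A rough host-side equivalent using kubectl (the kubeconfig path and pod name are taken from elsewhere in this log and are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition status ("True"/"False").
func podReady(kubeconfig, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Poll a system-critical pod the way the log does, with a short backoff.
	for i := 0; i < 20; i++ {
		ready, err := podReady("/home/jenkins/minikube-integration/17975-6238/kubeconfig",
			"kube-system", "kube-scheduler-old-k8s-version-771669")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod never became Ready")
}
```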
	I0116 23:55:54.108682   59622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:54.128862   59622 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:54.128889   59622 kubeadm.go:640] restartCluster took 22.356081524s
	I0116 23:55:54.128900   59622 kubeadm.go:406] StartCluster complete in 22.408946885s
	I0116 23:55:54.128919   59622 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.129004   59622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:54.131909   59622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.132201   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:54.132350   59622 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:54.132423   59622 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-771669"
	I0116 23:55:54.132445   59622 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-771669"
	I0116 23:55:54.132446   59622 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-771669"
	W0116 23:55:54.132457   59622 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:54.132467   59622 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:54.132468   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0116 23:55:54.132479   59622 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:54.132520   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132551   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132889   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.132943   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133041   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133083   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133245   59622 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-771669"
	I0116 23:55:54.133294   59622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-771669"
	I0116 23:55:54.133724   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133789   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.148645   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0116 23:55:54.148879   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 23:55:54.149227   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149356   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149715   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149739   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.149900   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149917   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.150032   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150210   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.150281   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150883   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.150932   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.154047   59622 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-771669"
	W0116 23:55:54.154070   59622 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:54.154099   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.154457   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.154502   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.156296   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0116 23:55:54.156719   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.157170   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.157199   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.157673   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.158266   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.158321   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.168301   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0116 23:55:54.168898   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.169505   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.169524   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.169888   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.170106   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.171966   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.174198   59622 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:54.173406   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0116 23:55:54.179587   59622 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.179605   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:54.179625   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.174560   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0116 23:55:54.180004   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180109   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180627   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180653   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180768   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180790   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180993   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181177   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181353   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.181578   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.181627   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.183580   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.185359   59622 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:54.184028   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.184548   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.186663   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:54.186672   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.186679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:54.186699   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.186698   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.186864   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.186964   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.187041   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.189698   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190070   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.190133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190266   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.190461   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.190582   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.190678   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.215481   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0116 23:55:54.215974   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.216416   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.216435   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.216816   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.217016   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.219327   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.219556   59622 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.219571   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:54.219588   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.222719   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223367   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.223154   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.223442   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223564   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.223712   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.223850   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.356173   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:54.356192   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:54.371191   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.410651   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:54.410679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:54.413826   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.524186   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.524211   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:54.553600   59622 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:54.610636   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.692080   59622 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-771669" context rescaled to 1 replicas
	I0116 23:55:54.692117   59622 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:54.694001   59622 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:54.695339   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:55.104119   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104142   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104162   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104148   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104471   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104493   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.104504   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104514   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104558   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104729   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104745   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104748   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105133   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105152   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105185   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.105199   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.105402   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.105496   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105518   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.113836   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.113861   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.114230   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.114254   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.114275   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.125955   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.125983   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.125955   59622 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:55:55.126228   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126243   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126267   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.126278   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.126579   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126599   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126609   59622 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:55.126587   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.128592   59622 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 23:55:55.129717   59622 addons.go:505] enable addons completed in 997.38021ms: enabled=[storage-provisioner default-storageclass metrics-server]
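	Addon enablement in the block above amounts to copying the manifests into /etc/kubernetes/addons/ over SSH and applying them with the guest's pinned kubectl. A compressed sketch of that apply step, assuming the same paths the log shows:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// sudo KUBECONFIG=... /var/lib/minikube/binaries/v1.16.0/kubectl apply -f ... -f ...
	args := append([]string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.16.0/kubectl", "apply",
	}, flagPairs(manifests)...)
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "apply failed: %v\n%s", err, out)
		os.Exit(1)
	}
}

// flagPairs turns each manifest path into a "-f <path>" argument pair.
func flagPairs(paths []string) []string {
	var out []string
	for _, p := range paths {
		out = append(out, "-f", p)
	}
	return out
}
```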
	I0116 23:55:53.987019   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.987081   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.485357   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:54.345875   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:56.347375   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.898737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.905488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.130634   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:59.630394   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:56:00.487739   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.985925   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.845233   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:00.845467   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:03.344488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.130130   59622 node_ready.go:49] node "old-k8s-version-771669" has status "Ready":"True"
	I0116 23:56:02.130152   59622 node_ready.go:38] duration metric: took 7.004088356s waiting for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:56:02.130160   59622 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.135239   59622 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140322   59622 pod_ready.go:92] pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.140347   59622 pod_ready.go:81] duration metric: took 5.084772ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140358   59622 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144917   59622 pod_ready.go:92] pod "etcd-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.144938   59622 pod_ready.go:81] duration metric: took 4.572247ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144946   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149588   59622 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.149606   59622 pod_ready.go:81] duration metric: took 4.65461ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149614   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153874   59622 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.153891   59622 pod_ready.go:81] duration metric: took 4.272031ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153899   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531721   59622 pod_ready.go:92] pod "kube-proxy-9ghls" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.531742   59622 pod_ready.go:81] duration metric: took 377.837979ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531751   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930934   59622 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.930957   59622 pod_ready.go:81] duration metric: took 399.199037ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930966   59622 pod_ready.go:38] duration metric: took 800.791409ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.930982   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:56:02.931031   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:56:02.945606   59622 api_server.go:72] duration metric: took 8.253459173s to wait for apiserver process to appear ...
	I0116 23:56:02.945631   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:56:02.945649   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:56:02.952493   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:56:02.953510   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:56:02.953536   59622 api_server.go:131] duration metric: took 7.895148ms to wait for apiserver health ...
	I0116 23:56:02.953545   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:56:03.133648   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:56:03.133673   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.133679   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.133683   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.133688   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.133691   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.133695   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.133698   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.133704   59622 system_pods.go:74] duration metric: took 180.152859ms to wait for pod list to return data ...
	I0116 23:56:03.133710   59622 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:56:03.331291   59622 default_sa.go:45] found service account: "default"
	I0116 23:56:03.331318   59622 default_sa.go:55] duration metric: took 197.601815ms for default service account to be created ...
	I0116 23:56:03.331327   59622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:56:03.535418   59622 system_pods.go:86] 7 kube-system pods found
	I0116 23:56:03.535445   59622 system_pods.go:89] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.535450   59622 system_pods.go:89] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.535454   59622 system_pods.go:89] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.535459   59622 system_pods.go:89] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.535462   59622 system_pods.go:89] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.535466   59622 system_pods.go:89] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.535470   59622 system_pods.go:89] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.535476   59622 system_pods.go:126] duration metric: took 204.144185ms to wait for k8s-apps to be running ...
	I0116 23:56:03.535483   59622 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:56:03.535528   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:56:03.558457   59622 system_svc.go:56] duration metric: took 22.958568ms WaitForService to wait for kubelet.
	I0116 23:56:03.558483   59622 kubeadm.go:581] duration metric: took 8.866344408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:56:03.558508   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:56:03.731393   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:56:03.731421   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:56:03.731429   59622 node_conditions.go:105] duration metric: took 172.916822ms to run NodePressure ...
	I0116 23:56:03.731440   59622 start.go:228] waiting for startup goroutines ...
	I0116 23:56:03.731446   59622 start.go:233] waiting for cluster config update ...
	I0116 23:56:03.731455   59622 start.go:242] writing updated cluster config ...
	I0116 23:56:03.731701   59622 ssh_runner.go:195] Run: rm -f paused
	I0116 23:56:03.779121   59622 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 23:56:03.780832   59622 out.go:177] 
	W0116 23:56:03.782249   59622 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 23:56:03.783563   59622 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 23:56:03.784839   59622 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-771669" cluster and "default" namespace by default
	I0116 23:56:00.398654   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.895567   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:04.986421   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:06.987967   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.844145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.844338   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.397178   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.895626   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.486597   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:11.987301   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:10.345558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.346663   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.896758   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.397091   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.488021   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.488653   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.844671   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.846046   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.897098   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:17.396519   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.986905   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.488422   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.846198   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.344147   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:19.397728   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.896773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.986213   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:25.986326   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:27.987150   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.845648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.344054   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:28.344553   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:24.396383   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.896341   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.487401   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.986835   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.346441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.847915   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:29.396831   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:31.397001   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:33.896875   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.486456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.488505   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:34.852382   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.347707   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.897340   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:38.397188   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.987512   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.487096   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.845150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:40.397474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.895926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.985826   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.987077   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.344935   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.844558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:45.397742   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:47.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:48.987672   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.488276   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.344755   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.844573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.902616   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:52.397613   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.989294   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:56.486373   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.844691   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:55.844956   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.345033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:54.899462   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:57.396680   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.986702   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.485949   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.486250   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:00.347078   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:02.845105   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:59.397016   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.397815   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.898419   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.486385   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.486685   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.344293   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.345029   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:06.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:08.397358   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.986254   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:11.986807   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.845903   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.345589   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:10.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.896725   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:13.986990   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.487092   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:14.845336   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.845800   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:15.396130   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:17.399737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:18.986833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:20.987345   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.486929   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.344648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.345638   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.896048   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.897272   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:25.987181   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.488006   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.846298   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.345451   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.346186   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:24.398032   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.896171   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.987497   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:33.485899   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.347831   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:32.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:29.398760   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:31.896331   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.486038   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.487296   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.344615   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.844449   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:34.397051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:36.400079   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:38.896897   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.492372   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.987336   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.847519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:42.346252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.396236   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.396714   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.988240   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:46.486455   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:48.487134   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:44.848036   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.345407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:45.397310   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.397378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:50.986902   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.492230   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.845627   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.397826   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.895923   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.897342   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:55.986753   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:57.986861   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:54.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.344864   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.345725   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.897155   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.486888   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.987550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.844347   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.846516   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:01.396565   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:03.397374   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:04.990116   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.487567   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.345481   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.844570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.897023   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:08.396985   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.990087   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.490589   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.844815   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:11.845732   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:10.895979   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.896502   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.986451   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.986611   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.344767   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.844872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:15.398203   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:17.399261   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:18.987191   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.487703   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:23.487926   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.347376   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.845439   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.896972   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:22.397424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:25.987262   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.486174   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.344012   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.347050   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.398243   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.896557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.987243   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.988415   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.844551   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.845899   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.846576   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:29.396646   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:31.397556   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:33.896411   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.486850   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.985735   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.344337   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.344473   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.896685   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.898876   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.986999   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.486890   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.345534   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:41.345897   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:40.396241   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.396546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.987464   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.988853   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:43.846142   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.343994   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.396719   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.896228   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.896671   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:49.486803   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:51.491540   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.845009   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.847872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:52.847933   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.897309   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.396763   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.987492   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:56.486550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:58.486963   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.346425   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.347346   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.397687   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.399191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:00.987456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.486837   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.843983   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.844326   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.895907   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.896151   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.900424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:05.991223   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.486493   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.844751   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.344021   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.344949   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.397063   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.895750   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.987148   59938 pod_ready.go:81] duration metric: took 4m0.007687151s waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:08.987175   59938 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 23:59:08.987182   59938 pod_ready.go:38] duration metric: took 4m1.609147819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:08.987199   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:59:08.987235   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:08.987285   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:09.035133   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:09.035154   59938 cri.go:89] found id: ""
	I0116 23:59:09.035161   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:09.035211   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.039082   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:09.039138   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:09.085096   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:09.085167   59938 cri.go:89] found id: ""
	I0116 23:59:09.085181   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:09.085246   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.090821   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:09.090893   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:09.127517   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.127548   59938 cri.go:89] found id: ""
	I0116 23:59:09.127558   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:09.127620   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.131643   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:09.131759   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:09.168954   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:09.168979   59938 cri.go:89] found id: ""
	I0116 23:59:09.168988   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:09.169049   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.173389   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:09.173454   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:09.212516   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.212543   59938 cri.go:89] found id: ""
	I0116 23:59:09.212549   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:09.212597   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.216401   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:09.216458   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:09.253140   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.253166   59938 cri.go:89] found id: ""
	I0116 23:59:09.253176   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:09.253235   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.257248   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:09.257315   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:09.296077   59938 cri.go:89] found id: ""
	I0116 23:59:09.296108   59938 logs.go:284] 0 containers: []
	W0116 23:59:09.296119   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:09.296126   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:09.296184   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:09.346212   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:09.346234   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:09.346240   59938 cri.go:89] found id: ""
	I0116 23:59:09.346261   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:09.346320   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.350651   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.353960   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:09.353984   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.387875   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:09.387900   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.428147   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:09.428173   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:09.481107   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:09.481135   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:09.536958   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:09.536994   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:09.550512   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:09.550547   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.605837   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:09.605870   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:10.096496   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:10.096548   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:10.134931   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:10.134973   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:10.276791   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:10.276824   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:10.335509   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:10.335544   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:10.395664   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:10.395708   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.431013   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:10.431051   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:12.975358   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:59:12.989628   59938 api_server.go:72] duration metric: took 4m12.851755215s to wait for apiserver process to appear ...
	I0116 23:59:12.989650   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:59:12.989689   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:12.989738   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:13.026039   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.026071   59938 cri.go:89] found id: ""
	I0116 23:59:13.026083   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:13.026138   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.030174   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:13.030236   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:13.067808   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:13.067834   59938 cri.go:89] found id: ""
	I0116 23:59:13.067840   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:13.067888   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.072042   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:13.072118   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:13.111330   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.111351   59938 cri.go:89] found id: ""
	I0116 23:59:13.111359   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:13.111403   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.115095   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:13.115187   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:13.158668   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:13.158691   59938 cri.go:89] found id: ""
	I0116 23:59:13.158699   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:13.158758   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.162836   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:13.162899   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:13.202353   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:13.202372   59938 cri.go:89] found id: ""
	I0116 23:59:13.202379   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:13.202425   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.206475   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:13.206544   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:13.241036   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:13.241069   59938 cri.go:89] found id: ""
	I0116 23:59:13.241080   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:13.241136   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.245245   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:13.245309   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:13.286069   59938 cri.go:89] found id: ""
	I0116 23:59:13.286098   59938 logs.go:284] 0 containers: []
	W0116 23:59:13.286107   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:13.286115   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:13.286178   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:13.324129   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.324148   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.324152   59938 cri.go:89] found id: ""
	I0116 23:59:13.324159   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:13.324201   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.328325   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.332030   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:13.332052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:13.345141   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:13.345181   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.404778   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:13.404809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.441286   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:13.441323   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:13.503668   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:13.503702   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.542599   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:13.542631   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.347184   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:12.844417   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:10.896545   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.397454   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.578579   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:13.578609   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.615906   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:13.615934   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:14.022019   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:14.022058   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:14.139776   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:14.139809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:14.201936   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:14.201970   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:14.240473   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:14.240500   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:14.291008   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:14.291037   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
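The interleaved "Gathering logs for ..." entries above show the pattern minikube uses for diagnostics: list the container IDs for each component with "sudo crictl ps -a --quiet --name=<component>", then tail each container with "sudo /usr/bin/crictl logs --tail 400 <id>", all executed on the VM over SSH (the ssh_runner.go:195 Run lines). Purely as an illustration, and not minikube's actual cri.go/logs.go code, the same two invocations can be scripted locally in Go as follows; it assumes crictl and sudo are available on the machine where it runs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>": it returns
// the IDs of all containers (running or exited) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors "sudo crictl logs --tail 400 <id>".
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}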
	I0116 23:59:16.843555   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:59:16.849532   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:59:16.850519   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:59:16.850538   59938 api_server.go:131] duration metric: took 3.860882856s to wait for apiserver health ...
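The healthz probe logged just above is an HTTPS GET against the apiserver endpoint https://192.168.50.183:8443/healthz that is considered successful once it returns 200 "ok". A minimal stand-alone sketch of such a probe is below; it skips TLS verification for brevity, whereas the real check presumably presents proper cluster credentials, so treat it only as a throwaway local aid:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: skipping verification is tolerable for a
			// one-off local probe, never for anything handling real traffic.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.183:8443/healthz") // address taken from the log above
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}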
	I0116 23:59:16.850547   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:59:16.850568   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:16.850610   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:16.900417   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:16.900434   59938 cri.go:89] found id: ""
	I0116 23:59:16.900441   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:16.900493   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.905495   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:16.905548   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:16.945387   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:16.945406   59938 cri.go:89] found id: ""
	I0116 23:59:16.945413   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:16.945463   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.949948   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:16.950016   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:16.987183   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:16.987202   59938 cri.go:89] found id: ""
	I0116 23:59:16.987209   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:16.987252   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.992140   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:16.992191   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:17.029253   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.029275   59938 cri.go:89] found id: ""
	I0116 23:59:17.029282   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:17.029336   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.033524   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:17.033609   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:17.068889   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:17.068913   59938 cri.go:89] found id: ""
	I0116 23:59:17.068932   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:17.068986   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.072818   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:17.072885   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:17.111186   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.111207   59938 cri.go:89] found id: ""
	I0116 23:59:17.111216   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:17.111279   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.115133   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:17.115192   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:17.150279   59938 cri.go:89] found id: ""
	I0116 23:59:17.150307   59938 logs.go:284] 0 containers: []
	W0116 23:59:17.150316   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:17.150321   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:17.150401   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:17.192284   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.192321   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.192328   59938 cri.go:89] found id: ""
	I0116 23:59:17.192338   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:17.192394   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.196472   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.200243   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:17.200266   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.240155   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:17.240188   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:17.252553   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:17.252585   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.304688   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:17.304721   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.346444   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:17.346470   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:17.497208   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:17.497241   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:17.561621   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:17.561648   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:17.611648   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:17.611677   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.646407   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:17.646436   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:17.991476   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:17.991528   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:18.053214   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:18.053251   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:18.128011   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:18.128049   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:18.165018   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:18.165052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:15.345715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.849104   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:15.896059   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.890054   60073 pod_ready.go:81] duration metric: took 4m0.00102229s waiting for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:17.890102   60073 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:17.890127   60073 pod_ready.go:38] duration metric: took 4m7.665333761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:17.890162   60073 kubeadm.go:640] restartCluster took 4m29.748178484s
	W0116 23:59:17.890247   60073 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:17.890288   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:20.715055   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:59:20.715096   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.715109   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.715116   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.715123   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.715129   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.715136   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.715146   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.715156   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.715180   59938 system_pods.go:74] duration metric: took 3.864627163s to wait for pod list to return data ...
	I0116 23:59:20.715190   59938 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:59:20.718138   59938 default_sa.go:45] found service account: "default"
	I0116 23:59:20.718165   59938 default_sa.go:55] duration metric: took 2.964863ms for default service account to be created ...
	I0116 23:59:20.718175   59938 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:59:20.724393   59938 system_pods.go:86] 8 kube-system pods found
	I0116 23:59:20.724420   59938 system_pods.go:89] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.724428   59938 system_pods.go:89] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.724435   59938 system_pods.go:89] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.724443   59938 system_pods.go:89] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.724449   59938 system_pods.go:89] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.724457   59938 system_pods.go:89] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.724467   59938 system_pods.go:89] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.724479   59938 system_pods.go:89] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.724490   59938 system_pods.go:126] duration metric: took 6.307831ms to wait for k8s-apps to be running ...
	I0116 23:59:20.724503   59938 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:59:20.724558   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:20.739056   59938 system_svc.go:56] duration metric: took 14.504317ms WaitForService to wait for kubelet.
	I0116 23:59:20.739102   59938 kubeadm.go:581] duration metric: took 4m20.601225794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:59:20.739130   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:59:20.742521   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:59:20.742550   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:59:20.742565   59938 node_conditions.go:105] duration metric: took 3.429513ms to run NodePressure ...
	I0116 23:59:20.742581   59938 start.go:228] waiting for startup goroutines ...
	I0116 23:59:20.742594   59938 start.go:233] waiting for cluster config update ...
	I0116 23:59:20.742607   59938 start.go:242] writing updated cluster config ...
	I0116 23:59:20.742897   59938 ssh_runner.go:195] Run: rm -f paused
	I0116 23:59:20.796748   59938 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 23:59:20.799136   59938 out.go:177] * Done! kubectl is now configured to use "no-preload-085322" cluster and "default" namespace by default
	I0116 23:59:20.345640   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:22.845018   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:24.845103   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:26.846579   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:29.345070   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.346027   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:33.346506   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.203795   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.313480768s)
	I0116 23:59:31.203876   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:31.217359   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:31.228245   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:31.238220   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:31.238268   60073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:31.453638   60073 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 23:59:35.845570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:37.845959   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:42.067699   60073 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:42.067758   60073 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:42.067846   60073 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:42.067963   60073 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:42.068086   60073 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:42.068177   60073 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:42.069920   60073 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:42.070029   60073 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:42.070134   60073 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:42.070239   60073 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:42.070320   60073 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:42.070461   60073 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:42.070543   60073 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:42.070628   60073 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:42.070700   60073 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:42.070790   60073 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:42.070885   60073 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:42.070932   60073 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:42.070998   60073 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:42.071063   60073 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:42.071135   60073 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:42.071215   60073 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:42.071285   60073 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:42.071387   60073 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:42.071470   60073 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:42.072979   60073 out.go:204]   - Booting up control plane ...
	I0116 23:59:42.073092   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:42.073200   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:42.073276   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:42.073388   60073 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:42.073521   60073 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:42.073576   60073 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:42.073797   60073 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:42.073902   60073 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002800 seconds
	I0116 23:59:42.074028   60073 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 23:59:42.074167   60073 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 23:59:42.074262   60073 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 23:59:42.074513   60073 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-837871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 23:59:42.074590   60073 kubeadm.go:322] [bootstrap-token] Using token: ta3wls.bkzq7grnlnkl7idk
	I0116 23:59:42.076261   60073 out.go:204]   - Configuring RBAC rules ...
	I0116 23:59:42.076394   60073 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 23:59:42.076494   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 23:59:42.076672   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 23:59:42.076836   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 23:59:42.077027   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 23:59:42.077141   60073 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 23:59:42.077286   60073 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 23:59:42.077338   60073 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 23:59:42.077401   60073 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 23:59:42.077420   60073 kubeadm.go:322] 
	I0116 23:59:42.077490   60073 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 23:59:42.077501   60073 kubeadm.go:322] 
	I0116 23:59:42.077590   60073 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 23:59:42.077599   60073 kubeadm.go:322] 
	I0116 23:59:42.077631   60073 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 23:59:42.077704   60073 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 23:59:42.077768   60073 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 23:59:42.077777   60073 kubeadm.go:322] 
	I0116 23:59:42.077841   60073 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 23:59:42.077855   60073 kubeadm.go:322] 
	I0116 23:59:42.077910   60073 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 23:59:42.077918   60073 kubeadm.go:322] 
	I0116 23:59:42.077980   60073 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 23:59:42.078071   60073 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 23:59:42.078167   60073 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 23:59:42.078177   60073 kubeadm.go:322] 
	I0116 23:59:42.078274   60073 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 23:59:42.078382   60073 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 23:59:42.078392   60073 kubeadm.go:322] 
	I0116 23:59:42.078488   60073 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078612   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 23:59:42.078642   60073 kubeadm.go:322] 	--control-plane 
	I0116 23:59:42.078651   60073 kubeadm.go:322] 
	I0116 23:59:42.078749   60073 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 23:59:42.078758   60073 kubeadm.go:322] 
	I0116 23:59:42.078854   60073 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078989   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 23:59:42.079007   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:59:42.079017   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:59:42.080763   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:59:39.838671   60269 pod_ready.go:81] duration metric: took 4m0.001157455s waiting for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:39.838703   60269 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:39.838724   60269 pod_ready.go:38] duration metric: took 4m10.089026356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:39.838774   60269 kubeadm.go:640] restartCluster took 4m29.617057242s
	W0116 23:59:39.838852   60269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:39.838881   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:42.082183   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:59:42.116830   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
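The 457-byte 1-k8s.conflist copied into /etc/cni/net.d in the line above is not reproduced in this log. As a rough sketch only, the snippet below writes a generic bridge-plus-portmap CNI chain of the kind such a file usually contains; the concrete field values are assumptions, not minikube's exact configuration:

package main

import "os"

// conflist is an illustrative CNI config: a bridge plugin with host-local
// IPAM chained with portmap. Minikube's actual 1-k8s.conflist may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Both steps need root, matching the "sudo mkdir -p /etc/cni/net.d" and
	// the scp seen in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}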
	I0116 23:59:42.163609   60073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:59:42.163699   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.163705   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=embed-certs-837871 minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.221959   60073 ops.go:34] apiserver oom_adj: -16
	I0116 23:59:42.506451   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.007345   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.506584   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.007197   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.507002   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.006480   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.506954   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.006461   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.506833   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.007157   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.506780   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.007146   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.506504   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:49.006489   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.364253   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.525344336s)
	I0116 23:59:53.364334   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:53.379240   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:53.389562   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:53.400331   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:53.400385   60269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:53.462116   60269 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:53.462202   60269 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:53.624890   60269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:53.625015   60269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:53.625132   60269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:53.877364   60269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:49.506939   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.007132   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.506909   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.006499   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.506508   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.006475   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.507008   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.007272   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.506479   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.007240   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.507034   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.651685   60073 kubeadm.go:1088] duration metric: took 12.488048347s to wait for elevateKubeSystemPrivileges.
	I0116 23:59:54.651729   60073 kubeadm.go:406] StartCluster complete in 5m6.561279262s
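The burst of identical "kubectl get sa default" runs between 23:59:42 and 23:59:54 above is a poll-until-ready loop: minikube retries roughly every half second until the default service account exists, which is the elevateKubeSystemPrivileges step reported as taking 12.49s. A stand-alone sketch of that pattern (using whatever kubectl and kubeconfig are on the local PATH, rather than the VM-local binary and kubeconfig from the log) might look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // arbitrary cap for this sketch
	for time.Now().Before(deadline) {
		// Exits 0 once the "default" service account exists in the current namespace.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second retries
	}
	fmt.Println("timed out waiting for the default service account")
}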
	I0116 23:59:54.651753   60073 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.651855   60073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:59:54.654608   60073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.654868   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:59:54.654894   60073 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:59:54.654964   60073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-837871"
	I0116 23:59:54.654980   60073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-837871"
	I0116 23:59:54.655005   60073 addons.go:69] Setting metrics-server=true in profile "embed-certs-837871"
	I0116 23:59:54.655018   60073 addons.go:234] Setting addon metrics-server=true in "embed-certs-837871"
	W0116 23:59:54.655027   60073 addons.go:243] addon metrics-server should already be in state true
	I0116 23:59:54.655090   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:59:54.655026   60073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-837871"
	I0116 23:59:54.655160   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.654988   60073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-837871"
	W0116 23:59:54.655234   60073 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:59:54.655271   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.655539   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655568   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655652   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655734   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.672017   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0116 23:59:54.672591   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.673220   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.673241   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.673335   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0116 23:59:54.673863   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0116 23:59:54.673894   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.673865   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674262   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674430   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674447   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.674491   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.674517   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.674764   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.674932   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674943   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.675310   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.675465   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.675601   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.675631   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.679148   60073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-837871"
	W0116 23:59:54.679166   60073 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:59:54.679192   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.679564   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.679582   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.694210   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0116 23:59:54.694711   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.694923   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0116 23:59:54.695308   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.695325   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.695432   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.695724   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.696036   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.696059   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.696124   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.696524   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.697116   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.697142   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.697326   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0116 23:59:54.697741   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.698016   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.700178   60073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:59:54.698504   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.701842   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.701911   60073 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:54.701927   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:59:54.701945   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.704090   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.704258   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.705992   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.706067   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.707873   60073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:59:53.878701   60269 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:53.878801   60269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:53.878881   60269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:53.879376   60269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:53.879833   60269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:53.880391   60269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:53.880900   60269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:53.881422   60269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:53.881941   60269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:53.882468   60269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:53.882982   60269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:53.883410   60269 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:53.883502   60269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:54.118678   60269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:54.334917   60269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:54.487424   60269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:55.124961   60269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:55.125701   60269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:55.128156   60269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:54.706475   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.706576   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.709278   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:59:54.709292   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:59:54.709305   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.709341   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.709501   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.709672   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.709805   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.712515   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713092   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.713180   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713283   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.713426   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.713633   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.713742   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.716354   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0116 23:59:54.716699   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.717118   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.717135   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.717441   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.717677   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.719338   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.719591   60073 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:54.719604   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:59:54.719619   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.722542   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.722963   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.723002   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.723112   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.723259   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.723463   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.723587   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.885431   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 23:59:55.001297   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:59:55.001329   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:59:55.003513   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:55.008428   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:55.068722   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:59:55.068751   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:59:55.129663   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:55.129686   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:59:55.161891   60073 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-837871" context rescaled to 1 replicas
	I0116 23:59:55.161935   60073 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:59:55.164356   60073 out.go:177] * Verifying Kubernetes components...
	I0116 23:59:55.165822   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:55.240612   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:56.696329   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810851137s)
	I0116 23:59:56.696383   60073 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
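The sed pipeline completed above injects a hosts block into the coredns ConfigMap ahead of the forward plugin and a log directive ahead of errors. The full Corefile is not printed in the log; assuming an otherwise stock CoreDNS configuration, the edited server block would look roughly like:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}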
	I0116 23:59:56.696338   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.69278648s)
	I0116 23:59:56.696422   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696440   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.696806   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.696868   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.696879   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.696889   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696898   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.697174   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.697191   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.697193   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.729656   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.729685   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.730006   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.730047   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.730051   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.196943   60073 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.031082317s)
	I0116 23:59:57.196991   60073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.197171   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.188708335s)
	I0116 23:59:57.197216   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197232   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197556   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197573   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.197590   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197600   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197905   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.197908   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197976   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.211232   60073 node_ready.go:49] node "embed-certs-837871" has status "Ready":"True"
	I0116 23:59:57.211308   60073 node_ready.go:38] duration metric: took 14.304366ms waiting for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.211330   60073 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
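Here the harness waits for every pod matching the listed kube-dns and control-plane label selectors to report Ready. A rough manual equivalent, assuming the kubeconfig context carries the profile name as elsewhere in this report, would be commands of the form:

	kubectl --context embed-certs-837871 -n kube-system wait pod \
	  --selector=k8s-app=kube-dns --for=condition=Ready --timeout=6m0s

repeated for the component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy and component=kube-scheduler selectors.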
	I0116 23:59:57.230768   60073 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:57.274393   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033730298s)
	I0116 23:59:57.274453   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274471   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.274881   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.274904   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.274915   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274925   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.275196   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.275249   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.275273   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.275284   60073 addons.go:470] Verifying addon metrics-server=true in "embed-certs-837871"
	I0116 23:59:57.277304   60073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 23:59:55.129817   60269 out.go:204]   - Booting up control plane ...
	I0116 23:59:55.129937   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:55.130951   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:55.132943   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:55.149929   60269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:55.151138   60269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:55.151234   60269 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:55.303686   60269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:57.278953   60073 addons.go:505] enable addons completed in 2.62405803s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 23:59:58.738410   60073 pod_ready.go:92] pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.738434   60073 pod_ready.go:81] duration metric: took 1.507588571s waiting for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.738444   60073 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744592   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.744617   60073 pod_ready.go:81] duration metric: took 6.165419ms waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744626   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750130   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.750152   60073 pod_ready.go:81] duration metric: took 5.519057ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750164   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755783   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.755809   60073 pod_ready.go:81] duration metric: took 5.636904ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755821   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801735   60073 pod_ready.go:92] pod "kube-proxy-n2l6s" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.801769   60073 pod_ready.go:81] duration metric: took 45.939564ms waiting for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801784   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:02.807761   60269 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503615 seconds
	I0117 00:00:02.807943   60269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:00:02.828242   60269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:00:03.364977   60269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:00:03.365242   60269 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-967325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:00:03.879636   60269 kubeadm.go:322] [bootstrap-token] Using token: y6fuay.d44apxq5qutu9x05
	I0116 23:59:59.202392   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:59.202420   60073 pod_ready.go:81] duration metric: took 400.626378ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:59.202435   60073 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:01.211490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.710138   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.881170   60269 out.go:204]   - Configuring RBAC rules ...
	I0117 00:00:03.881357   60269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:00:03.888392   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:00:03.896580   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:00:03.900204   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:00:03.907475   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:00:03.911613   60269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:00:03.931171   60269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:00:04.171033   60269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:00:04.300769   60269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:00:04.300793   60269 kubeadm.go:322] 
	I0117 00:00:04.300911   60269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:00:04.300944   60269 kubeadm.go:322] 
	I0117 00:00:04.301038   60269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:00:04.301049   60269 kubeadm.go:322] 
	I0117 00:00:04.301089   60269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:00:04.301161   60269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:00:04.301223   60269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:00:04.301234   60269 kubeadm.go:322] 
	I0117 00:00:04.301302   60269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:00:04.301312   60269 kubeadm.go:322] 
	I0117 00:00:04.301373   60269 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:00:04.301387   60269 kubeadm.go:322] 
	I0117 00:00:04.301445   60269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:00:04.301545   60269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:00:04.301645   60269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:00:04.301656   60269 kubeadm.go:322] 
	I0117 00:00:04.301758   60269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:00:04.301861   60269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:00:04.301871   60269 kubeadm.go:322] 
	I0117 00:00:04.301972   60269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302108   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:00:04.302156   60269 kubeadm.go:322] 	--control-plane 
	I0117 00:00:04.302167   60269 kubeadm.go:322] 
	I0117 00:00:04.302261   60269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:00:04.302272   60269 kubeadm.go:322] 
	I0117 00:00:04.302381   60269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302499   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:00:04.303423   60269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:00:04.303460   60269 cni.go:84] Creating CNI manager for ""
	I0117 00:00:04.303481   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:00:04.305311   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:00:04.307124   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:00:04.322172   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
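The 457-byte 1-k8s.conflist written here is not echoed in the log. As an illustration only, a bridge CNI configuration with a chained portmap plugin, which is the general shape such a file takes (the name, subnet and flags below are placeholders, not necessarily the values minikube generates), looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}

(The subnet and plugin list are illustrative; the actual file is generated by minikube's CNI manager for the bridge option selected above.)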
	I0117 00:00:04.389195   60269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:00:04.389280   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.389289   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=default-k8s-diff-port-967325 minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.714781   60269 ops.go:34] apiserver oom_adj: -16
	I0117 00:00:04.714929   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.215335   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.715241   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.215729   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.715270   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.215562   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.716006   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.215883   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.715530   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.710945   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:08.210490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:09.215561   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:09.715330   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215559   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.715284   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.215535   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.715573   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.215144   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.715603   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.715595   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:12.709378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:14.215373   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:14.715933   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.715488   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.215344   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.714958   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.874728   60269 kubeadm.go:1088] duration metric: took 12.485508304s to wait for elevateKubeSystemPrivileges.
	I0117 00:00:16.874771   60269 kubeadm.go:406] StartCluster complete in 5m6.711968782s
	I0117 00:00:16.874796   60269 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.874888   60269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:00:16.877055   60269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.877357   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0117 00:00:16.877379   60269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0117 00:00:16.877462   60269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877481   60269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877496   60269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877517   60269 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877523   60269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-967325"
	W0117 00:00:16.877526   60269 addons.go:243] addon metrics-server should already be in state true
	I0117 00:00:16.877487   60269 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877580   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0117 00:00:16.877586   60269 addons.go:243] addon storage-provisioner should already be in state true
	I0117 00:00:16.877598   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877641   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.877996   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.878023   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878044   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878110   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.894446   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0117 00:00:16.894710   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0117 00:00:16.894884   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895198   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895375   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895395   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895731   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895757   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895804   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896075   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896401   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.896436   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.896491   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0117 00:00:16.896763   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.897458   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.898007   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.898028   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.898517   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.899079   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.899106   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.900589   60269 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-967325"
	W0117 00:00:16.900606   60269 addons.go:243] addon default-storageclass should already be in state true
	I0117 00:00:16.900632   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.900945   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.900974   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.917329   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0117 00:00:16.918223   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0117 00:00:16.918283   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918593   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918787   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.918806   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919109   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.919135   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919173   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919426   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.919500   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.921674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.923470   60269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0117 00:00:16.922093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.924865   60269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:16.924882   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0117 00:00:16.924900   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.926158   60269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0117 00:00:16.927440   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0117 00:00:16.927461   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0117 00:00:16.927490   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.928105   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.928694   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.929107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.929289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.929432   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.930149   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0117 00:00:16.930552   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.931255   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.931275   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.931335   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931584   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.931606   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931762   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.931908   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.932042   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.932086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.932178   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.933382   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.933419   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.949543   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0117 00:00:16.950092   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.950585   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.950611   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.950912   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.951212   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.952912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.953207   60269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:16.953221   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0117 00:00:16.953242   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.955778   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956104   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.956144   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956381   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.956659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.956808   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.956958   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:17.129430   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0117 00:00:17.167358   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:17.198527   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0117 00:00:17.198553   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0117 00:00:17.313705   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0117 00:00:17.313743   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0117 00:00:17.318720   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:17.387945   60269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-967325" context rescaled to 1 replicas
	I0117 00:00:17.387984   60269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:00:17.391319   60269 out.go:177] * Verifying Kubernetes components...
	I0117 00:00:17.392893   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:00:17.493520   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:17.493544   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0117 00:00:17.613989   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:14.710779   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:17.209946   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:18.852085   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.722614342s)
	I0117 00:00:18.852124   60269 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0117 00:00:19.595960   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.277198121s)
	I0117 00:00:19.595983   60269 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.203057581s)
	I0117 00:00:19.596019   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596022   60269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.596033   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596131   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.428744793s)
	I0117 00:00:19.596164   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596175   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596418   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596437   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596448   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596458   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596544   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596572   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596585   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596603   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596675   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596683   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596697   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.598431   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.598485   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.598507   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.614041   60269 node_ready.go:49] node "default-k8s-diff-port-967325" has status "Ready":"True"
	I0117 00:00:19.614070   60269 node_ready.go:38] duration metric: took 18.033715ms waiting for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.614083   60269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:00:19.631026   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.631065   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.631393   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.631412   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.631430   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.643995   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.685268   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.071240033s)
	I0117 00:00:19.685313   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685685   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685706   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685722   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685725   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.685733   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685949   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685973   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685984   60269 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:19.688162   60269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0117 00:00:19.690707   60269 addons.go:505] enable addons completed in 2.813327403s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0117 00:00:20.653786   60269 pod_ready.go:92] pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.653817   60269 pod_ready.go:81] duration metric: took 1.009789354s waiting for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.653827   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.657327   60269 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657355   60269 pod_ready.go:81] duration metric: took 3.520465ms waiting for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	E0117 00:00:20.657367   60269 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657375   60269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664327   60269 pod_ready.go:92] pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.664345   60269 pod_ready.go:81] duration metric: took 6.963883ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664354   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669229   60269 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.669247   60269 pod_ready.go:81] duration metric: took 4.887581ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669255   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675553   60269 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.675577   60269 pod_ready.go:81] duration metric: took 6.316801ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675585   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800600   60269 pod_ready.go:92] pod "kube-proxy-2z6bl" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:21.800632   60269 pod_ready.go:81] duration metric: took 1.125039774s waiting for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800646   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200536   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:22.200559   60269 pod_ready.go:81] duration metric: took 399.905665ms waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200569   60269 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
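This metrics-server pod never reports Ready in the polling lines that follow; the addon for this profile was pointed at fake.domain/registry.k8s.io/echoserver:1.4 above, so the image pull presumably never succeeds and the wait keeps cycling. A quick out-of-band check, using the pod name from the log, would be:

	kubectl --context default-k8s-diff-port-967325 -n kube-system describe pod metrics-server-57f55c9bc5-dqkll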
	I0117 00:00:19.212369   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:21.709474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:23.710530   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:24.210445   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:26.709024   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:28.709454   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:25.710634   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:27.710692   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:30.709571   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.710848   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:29.710867   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.209611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:35.208419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:37.708871   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:34.209847   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:36.210863   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:38.211047   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.209274   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711560   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.212061   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711598   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.209016   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211322   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.211051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.709459   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.209458   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.711889   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.210405   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.710123   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:57.208591   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.210670   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:56.711102   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:58.711595   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:59.708515   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.710699   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.210587   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:03.210938   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:04.207715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:06.709563   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:05.211825   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:07.709958   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:09.208156   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:11.208879   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:13.708545   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:10.211100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:12.710100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:16.209033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:18.209754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:14.710821   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:17.212258   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:20.708444   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.712038   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:19.711436   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.210580   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.714772   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:27.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.213488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:26.711404   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.710945   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:32.208179   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.211008   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:31.212442   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:33.711966   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:34.208936   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.209612   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.708413   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.211118   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.214093   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:41.208750   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:43.208812   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:40.710199   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:42.710497   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.708094   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:48.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.210899   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:47.214352   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:50.708669   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:52.709880   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:49.709767   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:51.710715   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:53.714522   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:55.209030   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:57.709205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:56.212226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:58.715976   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:00.209358   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:02.710521   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:01.210842   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:03.710418   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.208742   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:07.210121   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.711354   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:08.211933   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:09.210830   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:11.708402   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:13.710205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:10.212433   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:12.715928   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:16.207633   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:18.208824   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:15.214546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:17.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.209380   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.708970   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.212349   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.711167   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.208762   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.708487   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.212601   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:30.209319   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.708822   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:29.711046   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:35.207798   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.217291   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:34.710869   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.210140   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.707745   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711335   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.708871   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711327   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.207582   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.207988   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:48.709297   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.211602   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.714689   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.208519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.208808   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:49.212952   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.214415   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.710355   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.209145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:57.210556   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.716301   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:58.211226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:59.709541   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.208573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:00.709819   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.712699   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.208754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:06.708448   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:08.709286   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.713780   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:07.213872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:10.709570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:13.208062   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:09.714259   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:12.211448   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:15.209488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:17.709522   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:14.710693   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:16.711192   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:20.207874   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:22.211189   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:19.210191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:21.210773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:23.213975   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:24.708835   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:26.708889   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:25.710691   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:27.711139   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:29.209704   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:31.209811   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:33.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:30.210569   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:32.211539   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:35.708998   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:38.208295   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:34.711729   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:37.210492   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:40.707726   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:42.709246   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:39.211926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:41.711599   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:43.711794   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:44.710010   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:47.208407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:46.211285   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:48.212279   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:49.208869   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:51.210676   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:53.708315   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:50.212776   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:52.710665   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:55.709867   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:58.210415   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:54.711312   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:57.210611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:00.708385   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:03.208916   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210900   60073 pod_ready.go:81] duration metric: took 4m0.008455197s waiting for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	E0117 00:03:59.210913   60073 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:03:59.210923   60073 pod_ready.go:38] duration metric: took 4m1.999568751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
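The four-minute wait that expires here is minikube polling the metrics-server pod's Ready condition every couple of seconds. The same gate can be reproduced by hand against the cluster; a minimal sketch, assuming the profile/context name "embed-certs-837871" that appears further down in this log and the k8s-app=metrics-server label used by the minikube metrics-server addon (the label is an assumption, not taken from this log):

	kubectl --context embed-certs-837871 -n kube-system wait pod \
	  -l k8s-app=metrics-server --for=condition=Ready --timeout=4m
	# if the wait times out, the pod's events usually explain the unready container
	kubectl --context embed-certs-837871 -n kube-system describe pod -l k8s-app=metrics-server
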
	I0117 00:03:59.210941   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:03:59.210977   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:03:59.211045   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:03:59.268921   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.268947   60073 cri.go:89] found id: ""
	I0117 00:03:59.268956   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:03:59.269005   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.273505   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:03:59.273575   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:03:59.316812   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:03:59.316838   60073 cri.go:89] found id: ""
	I0117 00:03:59.316847   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:03:59.316902   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.321703   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:03:59.321778   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:03:59.365900   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:03:59.365920   60073 cri.go:89] found id: ""
	I0117 00:03:59.365927   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:03:59.365979   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.371077   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:03:59.371148   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:03:59.410379   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:03:59.410405   60073 cri.go:89] found id: ""
	I0117 00:03:59.410415   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:03:59.410475   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.414679   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:03:59.414752   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:03:59.452102   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.452137   60073 cri.go:89] found id: ""
	I0117 00:03:59.452146   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:03:59.452208   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.456735   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:03:59.456805   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:03:59.497070   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:03:59.497097   60073 cri.go:89] found id: ""
	I0117 00:03:59.497105   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:03:59.497172   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.501388   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:03:59.501464   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:03:59.542895   60073 cri.go:89] found id: ""
	I0117 00:03:59.542921   60073 logs.go:284] 0 containers: []
	W0117 00:03:59.542929   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:03:59.542935   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:03:59.542986   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:03:59.579487   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:03:59.579510   60073 cri.go:89] found id: ""
	I0117 00:03:59.579529   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:03:59.579583   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.583247   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:03:59.583272   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:03:59.682098   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:03:59.682136   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:03:59.811527   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:03:59.811555   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.858592   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:03:59.858623   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.896044   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:03:59.896077   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:00.305516   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:00.305553   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:00.346703   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:00.346734   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:00.360638   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:00.360671   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:00.405575   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:00.405607   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:00.443294   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:00.443325   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:00.489541   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:00.489572   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:00.547805   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:00.547835   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.085588   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:03.102500   60073 api_server.go:72] duration metric: took 4m7.940532649s to wait for apiserver process to appear ...
	I0117 00:04:03.102525   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:03.102560   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:03.102604   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:03.154743   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.154765   60073 cri.go:89] found id: ""
	I0117 00:04:03.154775   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:03.154837   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.158905   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:03.158964   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:03.199001   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.199026   60073 cri.go:89] found id: ""
	I0117 00:04:03.199035   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:03.199090   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.203757   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:03.203821   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:03.243821   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:03.243853   60073 cri.go:89] found id: ""
	I0117 00:04:03.243862   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:03.243926   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.248835   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:03.248938   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:03.287785   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.287807   60073 cri.go:89] found id: ""
	I0117 00:04:03.287817   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:03.287879   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.291737   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:03.291795   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:03.329647   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.329671   60073 cri.go:89] found id: ""
	I0117 00:04:03.329680   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:03.329740   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.337418   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:03.337513   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:03.375391   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:03.375412   60073 cri.go:89] found id: ""
	I0117 00:04:03.375419   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:03.375468   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.379630   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:03.379697   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:03.418311   60073 cri.go:89] found id: ""
	I0117 00:04:03.418353   60073 logs.go:284] 0 containers: []
	W0117 00:04:03.418366   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:03.418374   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:03.418425   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:03.464391   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.464414   60073 cri.go:89] found id: ""
	I0117 00:04:03.464421   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:03.464465   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.469427   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:03.469463   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:03.568016   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:03.568061   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:03.581553   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:03.581578   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.628971   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:03.629007   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.679732   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:03.679768   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.728836   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:03.728875   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.771849   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:03.771879   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:03.902777   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:03.902816   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.952219   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:03.952255   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:04.003190   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:04.003247   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:05.708428   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:07.708492   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:04.067058   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:04.067090   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:04.446812   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:04.446869   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:07.005449   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0117 00:04:07.011401   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0117 00:04:07.012696   60073 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:07.012723   60073 api_server.go:131] duration metric: took 3.910192448s to wait for apiserver health ...
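The healthz probe above is a plain HTTPS GET against the apiserver endpoint at 192.168.39.226:8443, and "ok" is the expected body on a healthy control plane. An equivalent manual check through kubectl, assuming the same "embed-certs-837871" context as in the sketch earlier:

	kubectl --context embed-certs-837871 get --raw /healthz
	# per-check breakdown on Kubernetes 1.28:
	kubectl --context embed-certs-837871 get --raw "/readyz?verbose"
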
	I0117 00:04:07.012732   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:07.012758   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:07.012804   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:07.052667   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:07.052699   60073 cri.go:89] found id: ""
	I0117 00:04:07.052708   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:07.052769   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.057415   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:07.057482   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:07.096347   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.096374   60073 cri.go:89] found id: ""
	I0117 00:04:07.096383   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:07.096445   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.100499   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:07.100598   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:07.145539   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:07.145561   60073 cri.go:89] found id: ""
	I0117 00:04:07.145567   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:07.145625   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.149880   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:07.149936   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:07.188723   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:07.188751   60073 cri.go:89] found id: ""
	I0117 00:04:07.188760   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:07.188822   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.193191   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:07.193259   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:07.236787   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.236811   60073 cri.go:89] found id: ""
	I0117 00:04:07.236820   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:07.236876   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.241167   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:07.241219   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:07.279432   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.279453   60073 cri.go:89] found id: ""
	I0117 00:04:07.279462   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:07.279527   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.283548   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:07.283618   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:07.319879   60073 cri.go:89] found id: ""
	I0117 00:04:07.319912   60073 logs.go:284] 0 containers: []
	W0117 00:04:07.319922   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:07.319930   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:07.319992   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:07.356138   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.356162   60073 cri.go:89] found id: ""
	I0117 00:04:07.356170   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:07.356219   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.360310   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:07.360339   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:07.457151   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:07.457197   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.501163   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:07.501207   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.544248   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:07.544279   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.593284   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:07.593321   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.635978   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:07.636016   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:07.950451   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:07.950489   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:08.003046   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:08.003089   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:08.017299   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:08.017341   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:08.152348   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:08.152401   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:08.213047   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:08.213084   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:08.249860   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:08.249897   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:10.813629   60073 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:10.813656   60073 system_pods.go:61] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.813670   60073 system_pods.go:61] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.813676   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.813681   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.813685   60073 system_pods.go:61] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.813689   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.813695   60073 system_pods.go:61] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.813699   60073 system_pods.go:61] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.813707   60073 system_pods.go:74] duration metric: took 3.800969531s to wait for pod list to return data ...
	I0117 00:04:10.813714   60073 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:10.816640   60073 default_sa.go:45] found service account: "default"
	I0117 00:04:10.816662   60073 default_sa.go:55] duration metric: took 2.941561ms for default service account to be created ...
	I0117 00:04:10.816669   60073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:10.823246   60073 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:10.823270   60073 system_pods.go:89] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.823274   60073 system_pods.go:89] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.823279   60073 system_pods.go:89] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.823283   60073 system_pods.go:89] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.823287   60073 system_pods.go:89] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.823291   60073 system_pods.go:89] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.823297   60073 system_pods.go:89] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.823302   60073 system_pods.go:89] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.823309   60073 system_pods.go:126] duration metric: took 6.635452ms to wait for k8s-apps to be running ...
	I0117 00:04:10.823316   60073 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:10.823358   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:10.840725   60073 system_svc.go:56] duration metric: took 17.401272ms WaitForService to wait for kubelet.
	I0117 00:04:10.840756   60073 kubeadm.go:581] duration metric: took 4m15.678792469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:10.840782   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:10.843904   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:10.843926   60073 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:10.843938   60073 node_conditions.go:105] duration metric: took 3.150197ms to run NodePressure ...
	I0117 00:04:10.843949   60073 start.go:228] waiting for startup goroutines ...
	I0117 00:04:10.843954   60073 start.go:233] waiting for cluster config update ...
	I0117 00:04:10.843963   60073 start.go:242] writing updated cluster config ...
	I0117 00:04:10.844214   60073 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:10.894554   60073 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:10.896971   60073 out.go:177] * Done! kubectl is now configured to use "embed-certs-837871" cluster and "default" namespace by default
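At this point every kube-system pod except metrics-server is Running; metrics-server is still Pending with its container unready, which is what kept the earlier four-minute wait from succeeding. Its container logs can be pulled the same way the harness pulls the control-plane logs above, over minikube ssh; a sketch, with the container id left as a placeholder to be filled from the first command's output (the "-p embed-certs-837871" profile name comes from the Done line above):

	out/minikube-linux-amd64 -p embed-certs-837871 ssh "sudo crictl ps -a --quiet --name=metrics-server"
	out/minikube-linux-amd64 -p embed-certs-837871 ssh "sudo crictl logs --tail 100 <container-id-from-previous-command>"
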
	I0117 00:04:10.209252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:12.707441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:14.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:17.208289   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:19.708419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:21.708960   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:22.208465   60269 pod_ready.go:81] duration metric: took 4m0.007885269s waiting for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	E0117 00:04:22.208486   60269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:04:22.208494   60269 pod_ready.go:38] duration metric: took 4m2.594399816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:04:22.208508   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:04:22.208558   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:22.208608   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:22.258977   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.259005   60269 cri.go:89] found id: ""
	I0117 00:04:22.259013   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:22.259116   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.264067   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:22.264126   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:22.302361   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:22.302396   60269 cri.go:89] found id: ""
	I0117 00:04:22.302407   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:22.302471   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.306898   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:22.306956   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:22.347083   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.347110   60269 cri.go:89] found id: ""
	I0117 00:04:22.347119   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:22.347177   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.352368   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:22.352441   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:22.392093   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:22.392121   60269 cri.go:89] found id: ""
	I0117 00:04:22.392131   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:22.392264   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.397726   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:22.397791   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:22.434242   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:22.434265   60269 cri.go:89] found id: ""
	I0117 00:04:22.434275   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:22.434342   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.438904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:22.438969   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:22.474797   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.474818   60269 cri.go:89] found id: ""
	I0117 00:04:22.474828   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:22.474874   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.478956   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:22.479020   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:22.517049   60269 cri.go:89] found id: ""
	I0117 00:04:22.517078   60269 logs.go:284] 0 containers: []
	W0117 00:04:22.517089   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:22.517096   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:22.517160   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:22.566393   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:22.566419   60269 cri.go:89] found id: ""
	I0117 00:04:22.566428   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:22.566486   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.572179   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:22.572206   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.624440   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:22.624471   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.666603   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:22.666629   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.734797   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:22.734829   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:22.827906   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:22.827941   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:22.842239   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:22.842269   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:22.990196   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:22.990226   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:23.048894   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:23.048933   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:23.093309   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:23.093340   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:23.135374   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:23.135400   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:23.172339   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:23.172366   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:23.567228   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:23.567266   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
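
	[Editorial sketch] The cycle recorded above follows a fixed pattern on this node: for each control-plane component the runner executes "crictl ps -a --quiet --name=<component>" to discover container IDs, then "crictl logs --tail 400 <id>" for every hit (falling back to a warning such as 'No container was found matching "kindnet"' when the list is empty). The short Go sketch below reproduces that loop in isolation. It is an illustration only: the component names and crictl flags are taken from the log lines above, while the assumption that it runs directly on the node with sudo is made purely for the example; this is not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names as they appear in the log-gathering cycle above.
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, name := range components {
			// List all containers (any state) whose name matches the component.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines of each matching container's logs, as the report does.
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("logs for %s [%s] failed: %v\n", name, id, err)
					continue
				}
				fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
			}
		}
	}
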
	I0117 00:04:26.111237   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:26.127331   60269 api_server.go:72] duration metric: took 4m8.739316517s to wait for apiserver process to appear ...
	I0117 00:04:26.127358   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:26.127403   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:26.127465   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:26.164726   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:26.164752   60269 cri.go:89] found id: ""
	I0117 00:04:26.164763   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:26.164824   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.168448   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:26.168500   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:26.205643   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:26.205673   60269 cri.go:89] found id: ""
	I0117 00:04:26.205682   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:26.205742   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.209923   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:26.209982   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:26.247432   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:26.247456   60269 cri.go:89] found id: ""
	I0117 00:04:26.247463   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:26.247514   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.251904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:26.252009   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:26.292943   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.292971   60269 cri.go:89] found id: ""
	I0117 00:04:26.292980   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:26.293038   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.298224   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:26.298307   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:26.338299   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:26.338322   60269 cri.go:89] found id: ""
	I0117 00:04:26.338331   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:26.338398   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.342452   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:26.342520   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:26.384665   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.384693   60269 cri.go:89] found id: ""
	I0117 00:04:26.384702   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:26.384761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.389556   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:26.389629   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:26.427717   60269 cri.go:89] found id: ""
	I0117 00:04:26.427748   60269 logs.go:284] 0 containers: []
	W0117 00:04:26.427758   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:26.427766   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:26.427825   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:26.467435   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.467463   60269 cri.go:89] found id: ""
	I0117 00:04:26.467471   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:26.467529   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.471617   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:26.471641   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.514185   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:26.514216   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.569408   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:26.569440   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.610011   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:26.610040   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:26.976249   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:26.976286   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:27.019812   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:27.019855   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:27.064258   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:27.064285   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:27.104147   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:27.104181   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:27.157665   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:27.157695   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:27.255786   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:27.255824   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:27.269460   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:27.269497   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:27.420255   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:27.420288   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.008636   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0117 00:04:30.014467   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0117 00:04:30.015693   60269 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:30.015716   60269 api_server.go:131] duration metric: took 3.888351113s to wait for apiserver health ...
	I0117 00:04:30.015724   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:30.015745   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:30.015789   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:30.055587   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.055608   60269 cri.go:89] found id: ""
	I0117 00:04:30.055626   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:30.055677   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.060043   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:30.060108   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:30.102912   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:30.102938   60269 cri.go:89] found id: ""
	I0117 00:04:30.102946   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:30.102995   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.107429   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:30.107490   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:30.149238   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.149259   60269 cri.go:89] found id: ""
	I0117 00:04:30.149266   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:30.149318   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.154207   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:30.154276   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:30.195972   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.195998   60269 cri.go:89] found id: ""
	I0117 00:04:30.196008   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:30.196067   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.200515   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:30.200593   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:30.242656   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.242686   60269 cri.go:89] found id: ""
	I0117 00:04:30.242696   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:30.242761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.247430   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:30.247488   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:30.285008   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.285036   60269 cri.go:89] found id: ""
	I0117 00:04:30.285045   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:30.285123   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.292254   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:30.292325   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:30.329856   60269 cri.go:89] found id: ""
	I0117 00:04:30.329884   60269 logs.go:284] 0 containers: []
	W0117 00:04:30.329895   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:30.329902   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:30.329962   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:30.370003   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.370026   60269 cri.go:89] found id: ""
	I0117 00:04:30.370033   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:30.370081   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.374869   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:30.374896   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:30.388524   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:30.388564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:30.520901   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:30.520935   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.568977   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:30.569016   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.604580   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:30.604620   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.642634   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:30.642668   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.692005   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:30.692048   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:30.745471   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:30.745532   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:30.842886   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:30.842926   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.891850   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:30.891882   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.929266   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:30.929295   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:31.236511   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:31.236564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:33.783706   60269 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:33.783732   60269 system_pods.go:61] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.783737   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.783742   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.783746   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.783750   60269 system_pods.go:61] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.783754   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.783760   60269 system_pods.go:61] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.783764   60269 system_pods.go:61] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.783772   60269 system_pods.go:74] duration metric: took 3.768043559s to wait for pod list to return data ...
	I0117 00:04:33.783780   60269 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:33.786490   60269 default_sa.go:45] found service account: "default"
	I0117 00:04:33.786515   60269 default_sa.go:55] duration metric: took 2.725972ms for default service account to be created ...
	I0117 00:04:33.786525   60269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:33.793345   60269 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:33.793372   60269 system_pods.go:89] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.793377   60269 system_pods.go:89] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.793382   60269 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.793388   60269 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.793392   60269 system_pods.go:89] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.793396   60269 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.793404   60269 system_pods.go:89] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.793410   60269 system_pods.go:89] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.793417   60269 system_pods.go:126] duration metric: took 6.886472ms to wait for k8s-apps to be running ...
	I0117 00:04:33.793427   60269 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:33.793470   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:33.809147   60269 system_svc.go:56] duration metric: took 15.709692ms WaitForService to wait for kubelet.
	I0117 00:04:33.809197   60269 kubeadm.go:581] duration metric: took 4m16.421187944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:33.809225   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:33.813251   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:33.813289   60269 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:33.813315   60269 node_conditions.go:105] duration metric: took 4.084961ms to run NodePressure ...
	I0117 00:04:33.813339   60269 start.go:228] waiting for startup goroutines ...
	I0117 00:04:33.813349   60269 start.go:233] waiting for cluster config update ...
	I0117 00:04:33.813362   60269 start.go:242] writing updated cluster config ...
	I0117 00:04:33.813716   60269 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:33.866136   60269 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:33.868353   60269 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-967325" cluster and "default" namespace by default
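
	[Editorial sketch] Before the "Done!" line, the log above records the readiness sequence: poll the apiserver's healthz endpoint (https://192.168.61.144:8444/healthz) until it answers 200 "ok", read the control-plane version, then wait for kube-system pods, the default service account, running k8s-apps, and an active kubelet service. The Go sketch below illustrates only the first step, the healthz poll. The URL and timeout mirror the log, but the retry interval and the decision to skip TLS verification are assumptions made for illustration; this is not minikube's actual code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: the apiserver presents a self-signed certificate, so the
			// probe skips verification. A real client would trust the cluster CA.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, string(body))
					return nil
				}
			}
			time.Sleep(2 * time.Second) // back off before retrying
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		// Address and port mirror the healthz check shown in the log above.
		if err := waitForHealthz("https://192.168.61.144:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
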
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:54:53 UTC, ends at Wed 2024-01-17 00:13:35 UTC. --
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.573552194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450415573536697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=99517e7a-49f3-4c65-9cba-aac3f31b529d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.574048866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5e722cc6-ca7e-41f1-995b-44a72604284b name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.574119929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5e722cc6-ca7e-41f1-995b-44a72604284b name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.574294662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5e722cc6-ca7e-41f1-995b-44a72604284b name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.615721872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f0242689-7101-468b-8f89-16eec35442d1 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.615825349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f0242689-7101-468b-8f89-16eec35442d1 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.616925493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c5459473-de4e-4197-95fd-83af8e3c3f7d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.617320619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450415617308193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c5459473-de4e-4197-95fd-83af8e3c3f7d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.617854758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7274f64c-e690-4063-8b77-fa7d020b030e name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.617924789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7274f64c-e690-4063-8b77-fa7d020b030e name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.618108509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7274f64c-e690-4063-8b77-fa7d020b030e name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.661714511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8b4c8e12-47db-467f-ab77-1e27671dc8a6 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.661795024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8b4c8e12-47db-467f-ab77-1e27671dc8a6 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.664279859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5f1ee7e7-984b-451b-862b-5fc387bb8606 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.664791488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450415664771710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5f1ee7e7-984b-451b-862b-5fc387bb8606 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.665374157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a8f8c0f9-f2a8-4b8a-901e-8bbaab14b316 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.665430630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a8f8c0f9-f2a8-4b8a-901e-8bbaab14b316 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.665587448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a8f8c0f9-f2a8-4b8a-901e-8bbaab14b316 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.701961437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3d01dd78-777b-4065-81ab-3913f21e6821 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.702017708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3d01dd78-777b-4065-81ab-3913f21e6821 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.703143360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5209774b-2624-48e4-91f3-b111422c2136 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.703592418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450415703576876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5209774b-2624-48e4-91f3-b111422c2136 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.704269213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d8c56fe4-4dae-40d8-af30-4a8ef4a58798 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.704315391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d8c56fe4-4dae-40d8-af30-4a8ef4a58798 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:13:35 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:13:35.704483149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d8c56fe4-4dae-40d8-af30-4a8ef4a58798 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	284632eb250da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   e5c0d80d63fd6       storage-provisioner
	a7769a6a67bd2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   7867a4fdb5254       kube-proxy-2z6bl
	d54e67f6cfd4e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   e0f0ee36f1cff       coredns-5dd5756b68-gtx6b
	1fc993cc983de       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   586832a72457c       etcd-default-k8s-diff-port-967325
	40ee2a17afa04       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   d928f1b0b9c40       kube-scheduler-default-k8s-diff-port-967325
	44c04220b559e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   f708ac268f096       kube-apiserver-default-k8s-diff-port-967325
	c733c24fe4cac       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   c7631b58cdd67       kube-controller-manager-default-k8s-diff-port-967325
	
	
	==> coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38319 - 12265 "HINFO IN 7363237114678645592.2750400025902809400. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009449709s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-967325
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-967325
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=default-k8s-diff-port-967325
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jan 2024 00:00:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-967325
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jan 2024 00:13:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:10:38 +0000   Tue, 16 Jan 2024 23:59:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:10:38 +0000   Tue, 16 Jan 2024 23:59:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:10:38 +0000   Tue, 16 Jan 2024 23:59:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:10:38 +0000   Wed, 17 Jan 2024 00:00:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.144
	  Hostname:    default-k8s-diff-port-967325
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1f1ef02d86f34c74b49036f31e17dfdd
	  System UUID:                1f1ef02d-86f3-4c74-b490-36f31e17dfdd
	  Boot ID:                    7c4fb655-2a4b-4cbb-ab84-165a343482be
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gtx6b                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-967325                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-967325             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-967325    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-2z6bl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-967325             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-dqkll                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-967325 event: Registered Node default-k8s-diff-port-967325 in Controller
	
	
	==> dmesg <==
	[Jan16 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066709] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.427608] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.791055] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135828] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.447613] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 23:55] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.124605] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.183641] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.125289] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.259460] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.957176] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[ +19.327094] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 23:59] systemd-fstab-generator[3480]: Ignoring "noauto" for root device
	[Jan17 00:00] systemd-fstab-generator[3807]: Ignoring "noauto" for root device
	[ +13.096569] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] <==
	{"level":"info","ts":"2024-01-16T23:59:58.708539Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2e42f40dd5a31940","local-member-id":"e83b713187665a36","added-peer-id":"e83b713187665a36","added-peer-peer-urls":["https://192.168.61.144:2380"]}
	{"level":"info","ts":"2024-01-16T23:59:58.712794Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T23:59:58.71372Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T23:59:58.713756Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T23:59:59.346256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T23:59:59.346316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T23:59:59.346337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 received MsgPreVoteResp from e83b713187665a36 at term 1"}
	{"level":"info","ts":"2024-01-16T23:59:59.346349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.346355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 received MsgVoteResp from e83b713187665a36 at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.346365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.346372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e83b713187665a36 elected leader e83b713187665a36 at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.347909Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.349092Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e83b713187665a36","local-member-attributes":"{Name:default-k8s-diff-port-967325 ClientURLs:[https://192.168.61.144:2379]}","request-path":"/0/members/e83b713187665a36/attributes","cluster-id":"2e42f40dd5a31940","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T23:59:59.349413Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:59:59.349983Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2e42f40dd5a31940","local-member-id":"e83b713187665a36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.350113Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.350161Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.350214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T23:59:59.350238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T23:59:59.350262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:59:59.35117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T23:59:59.358285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.144:2379"}
	{"level":"info","ts":"2024-01-17T00:09:59.387271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-01-17T00:09:59.390984Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":723,"took":"2.875702ms","hash":2448618449}
	{"level":"info","ts":"2024-01-17T00:09:59.391084Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2448618449,"revision":723,"compact-revision":-1}
	
	
	==> kernel <==
	 00:13:36 up 18 min,  0 users,  load average: 0.09, 0.13, 0.16
	Linux default-k8s-diff-port-967325 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] <==
	I0117 00:10:00.851468       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:10:01.851891       1 handler_proxy.go:93] no RequestInfo found in the context
	W0117 00:10:01.851925       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:10:01.852085       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:10:01.852122       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0117 00:10:01.852098       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:10:01.854235       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:11:00.743204       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:11:01.853075       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:11:01.853150       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:11:01.853159       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:11:01.854335       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:11:01.854418       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:11:01.854426       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:12:00.743673       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0117 00:13:00.742997       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:13:01.853830       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:13:01.853990       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:13:01.854043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:13:01.855129       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:13:01.855244       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:13:01.855278       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] <==
	I0117 00:07:46.381321       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:08:15.901921       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:08:16.391905       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:08:45.907786       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:08:46.403731       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:09:15.914192       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:09:16.412805       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:09:45.919554       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:09:46.423105       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:15.926011       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:16.432793       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:45.932175       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:46.442610       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:11:12.308774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="374.046µs"
	E0117 00:11:15.941209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:16.451349       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:11:24.305534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="124.499µs"
	E0117 00:11:45.948310       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:46.461868       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:15.954358       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:16.470022       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:45.961065       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:46.478940       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:13:15.967840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:13:16.489959       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] <==
	I0117 00:00:20.941465       1 server_others.go:69] "Using iptables proxy"
	I0117 00:00:20.972575       1 node.go:141] Successfully retrieved node IP: 192.168.61.144
	I0117 00:00:21.049024       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0117 00:00:21.049063       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0117 00:00:21.051568       1 server_others.go:152] "Using iptables Proxier"
	I0117 00:00:21.051753       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0117 00:00:21.051928       1 server.go:846] "Version info" version="v1.28.4"
	I0117 00:00:21.051960       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0117 00:00:21.054580       1 config.go:188] "Starting service config controller"
	I0117 00:00:21.054960       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0117 00:00:21.055074       1 config.go:97] "Starting endpoint slice config controller"
	I0117 00:00:21.055104       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0117 00:00:21.059744       1 config.go:315] "Starting node config controller"
	I0117 00:00:21.059856       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0117 00:00:21.155995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0117 00:00:21.156109       1 shared_informer.go:318] Caches are synced for service config
	I0117 00:00:21.160112       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] <==
	W0117 00:00:00.891301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0117 00:00:00.891353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0117 00:00:00.891468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0117 00:00:00.891502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0117 00:00:01.761053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0117 00:00:01.761103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0117 00:00:01.762943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0117 00:00:01.763079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0117 00:00:01.813484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0117 00:00:01.813577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0117 00:00:01.829708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0117 00:00:01.829940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0117 00:00:01.871355       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0117 00:00:01.871383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0117 00:00:01.925199       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0117 00:00:01.925326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0117 00:00:02.084340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0117 00:00:02.084458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0117 00:00:02.179410       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0117 00:00:02.179548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0117 00:00:02.193972       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0117 00:00:02.194021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0117 00:00:02.378459       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0117 00:00:02.378483       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0117 00:00:04.264916       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:54:53 UTC, ends at Wed 2024-01-17 00:13:36 UTC. --
	Jan 17 00:10:58 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:10:58.299912    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:11:04 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:11:04.301545    3814 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:11:04 default-k8s-diff-port-967325 kubelet[3814]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:11:04 default-k8s-diff-port-967325 kubelet[3814]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:11:04 default-k8s-diff-port-967325 kubelet[3814]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:11:12 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:11:12.287019    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:11:24 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:11:24.286302    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:11:36 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:11:36.285493    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:11:48 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:11:48.286213    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:11:59 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:11:59.285007    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:12:04 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:12:04.301816    3814 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:12:04 default-k8s-diff-port-967325 kubelet[3814]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:12:04 default-k8s-diff-port-967325 kubelet[3814]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:12:04 default-k8s-diff-port-967325 kubelet[3814]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:12:14 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:12:14.286234    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:12:25 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:12:25.285579    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:12:37 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:12:37.285935    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:12:52 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:12:52.285883    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:04.301606    3814 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:13:05 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:05.286069    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:13:20 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:20.286938    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:13:32 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:32.287154    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	
	
	==> storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] <==
	I0117 00:00:20.998440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0117 00:00:21.010905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0117 00:00:21.010994       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0117 00:00:21.022237       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0117 00:00:21.023981       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-967325_029a8914-b44c-4bb9-9ff7-18503f7dd5c3!
	I0117 00:00:21.029771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1de6f6eb-91f7-4996-afe5-42c5f34c038f", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-967325_029a8914-b44c-4bb9-9ff7-18503f7dd5c3 became leader
	I0117 00:00:21.124929       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-967325_029a8914-b44c-4bb9-9ff7-18503f7dd5c3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dqkll
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 describe pod metrics-server-57f55c9bc5-dqkll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-967325 describe pod metrics-server-57f55c9bc5-dqkll: exit status 1 (66.721719ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dqkll" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-967325 describe pod metrics-server-57f55c9bc5-dqkll: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0117 00:05:10.014789   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0117 00:05:47.136414   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0117 00:06:00.968309   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0117 00:06:04.215707   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0117 00:06:18.335293   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0117 00:06:33.061068   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0117 00:07:23.603401   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0117 00:07:32.960510   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0117 00:08:19.621823   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-771669 -n old-k8s-version-771669
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:14:07.564961151 +0000 UTC m=+5870.780464951
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-771669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-771669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (58.967412ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-771669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25: (1.630381672s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo cat                              | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:50:38.759896   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.759907   60269 out.go:309] Setting ErrFile to fd 2...
	I0116 23:50:38.759914   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.760126   60269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:50:38.760678   60269 out.go:303] Setting JSON to false
	I0116 23:50:38.761641   60269 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5585,"bootTime":1705443454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:50:38.761709   60269 start.go:138] virtualization: kvm guest
	I0116 23:50:38.763997   60269 out.go:177] * [default-k8s-diff-port-967325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:50:38.765368   60269 notify.go:220] Checking for updates...
	I0116 23:50:38.767255   60269 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:50:38.768689   60269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:50:38.770002   60269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:50:38.771265   60269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:50:38.772478   60269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:50:38.773887   60269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:50:38.775771   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:50:38.776343   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.776406   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.790484   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0116 23:50:38.790881   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.791331   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.791354   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.791767   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.791948   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.792207   60269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:50:38.792478   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.792512   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.806373   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0116 23:50:38.806769   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.807352   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.807377   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.807713   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.807888   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.844486   60269 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:50:38.845772   60269 start.go:298] selected driver: kvm2
	I0116 23:50:38.845786   60269 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.845896   60269 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:50:38.846669   60269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.846746   60269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:50:38.861437   60269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:50:38.861794   60269 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:50:38.861869   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:50:38.861886   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:50:38.861903   60269 start_flags.go:321] config:
	{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.862070   60269 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.864512   60269 out.go:177] * Starting control plane node default-k8s-diff-port-967325 in cluster default-k8s-diff-port-967325
	I0116 23:50:35.694534   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.766489   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.865813   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:50:38.865854   60269 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:50:38.865868   60269 cache.go:56] Caching tarball of preloaded images
	I0116 23:50:38.865946   60269 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:50:38.865958   60269 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:50:38.866067   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:50:38.866254   60269 start.go:365] acquiring machines lock for default-k8s-diff-port-967325: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
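Editor's note: the "acquiring machines lock ... {Delay:500ms Timeout:13m0s}" line above, and the later "acquired machines lock ... in 4m..." lines, indicate that parallel `minikube start` runs serialize host creation behind a shared lock that is polled with a fixed delay until a timeout. The Go sketch below is only an illustration of that polling-lock pattern under assumed semantics; the path and function name are hypothetical, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireMachinesLock is a hypothetical sketch of the behaviour suggested by
    // the "acquiring machines lock ... {Delay:500ms Timeout:13m0s}" log lines:
    // concurrent starts poll for an exclusive lock file, waiting up to a timeout.
    func acquireMachinesLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay) // another profile holds the lock; poll again
    	}
    }

    func main() {
    	start := time.Now()
    	release, err := acquireMachinesLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
    }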
	I0116 23:50:44.846593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:47.918614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:53.998619   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:57.070626   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:03.150612   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:06.222615   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:12.302594   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:15.374637   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:21.454609   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:24.526620   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:30.606636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:33.678599   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:39.758623   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:42.830638   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:48.910588   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:51.982570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:58.062585   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:01.134627   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:07.214606   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:10.286692   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:16.366642   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:19.438617   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:25.518614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:28.590572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:34.670577   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:37.742593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:43.822547   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:46.894566   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:52.974586   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:56.046663   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:02.126625   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:05.198647   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:11.278567   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:14.350629   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:20.430640   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:23.502572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:29.582639   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:32.654601   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:38.734636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:41.806621   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:47.886613   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:50.958654   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:57.038576   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:00.110570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
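Editor's note: the long run of "Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host" lines above comes from the driver repeatedly probing the guest's SSH port while the old-k8s-version VM is unreachable. The following Go sketch only illustrates that dial-and-retry pattern under assumed timeouts; it is not minikube's or libmachine's actual code.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH keeps dialing the guest's SSH port until it answers or the
    // overall deadline expires, logging each failed attempt, similar in spirit
    // to the repeated "Error dialing TCP" lines in the log above.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port 22 is reachable; provisioning can proceed
    		}
    		fmt.Printf("Error dialing TCP: %v\n", err)
    		time.Sleep(3 * time.Second) // brief pause between attempts
    	}
    	return fmt.Errorf("timed out waiting for SSH on %s", addr)
    }

    func main() {
    	if err := waitForSSH("192.168.72.114:22", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }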
	I0116 23:54:03.114737   59938 start.go:369] acquired machines lock for "no-preload-085322" in 4m4.444202574s
	I0116 23:54:03.114809   59938 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:03.114817   59938 fix.go:54] fixHost starting: 
	I0116 23:54:03.115151   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:03.115188   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:03.129740   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0116 23:54:03.130141   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:03.130598   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:54:03.130619   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:03.130926   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:03.131095   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:03.131232   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:54:03.132851   59938 fix.go:102] recreateIfNeeded on no-preload-085322: state=Stopped err=<nil>
	I0116 23:54:03.132873   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	W0116 23:54:03.133043   59938 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:03.134884   59938 out.go:177] * Restarting existing kvm2 VM for "no-preload-085322" ...
	I0116 23:54:03.136262   59938 main.go:141] libmachine: (no-preload-085322) Calling .Start
	I0116 23:54:03.136432   59938 main.go:141] libmachine: (no-preload-085322) Ensuring networks are active...
	I0116 23:54:03.137113   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network default is active
	I0116 23:54:03.137528   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network mk-no-preload-085322 is active
	I0116 23:54:03.137880   59938 main.go:141] libmachine: (no-preload-085322) Getting domain xml...
	I0116 23:54:03.138613   59938 main.go:141] libmachine: (no-preload-085322) Creating domain...
	I0116 23:54:03.112375   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:03.112409   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:54:03.114601   59622 machine.go:91] provisioned docker machine in 4m37.41859178s
	I0116 23:54:03.114647   59622 fix.go:56] fixHost completed within 4m37.439054279s
	I0116 23:54:03.114654   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 4m37.439073197s
	W0116 23:54:03.114678   59622 start.go:694] error starting host: provision: host is not running
	W0116 23:54:03.114769   59622 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:54:03.114780   59622 start.go:709] Will try again in 5 seconds ...
	I0116 23:54:04.327758   59938 main.go:141] libmachine: (no-preload-085322) Waiting to get IP...
	I0116 23:54:04.328580   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.329077   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.329172   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.329065   60794 retry.go:31] will retry after 242.417074ms: waiting for machine to come up
	I0116 23:54:04.573623   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.574286   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.574314   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.574234   60794 retry.go:31] will retry after 376.338621ms: waiting for machine to come up
	I0116 23:54:04.952081   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.952569   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.952609   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.952512   60794 retry.go:31] will retry after 437.645823ms: waiting for machine to come up
	I0116 23:54:05.392169   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.392672   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.392701   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.392621   60794 retry.go:31] will retry after 422.797207ms: waiting for machine to come up
	I0116 23:54:05.817196   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.817610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.817639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.817571   60794 retry.go:31] will retry after 640.372887ms: waiting for machine to come up
	I0116 23:54:06.459387   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:06.459792   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:06.459822   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:06.459719   60794 retry.go:31] will retry after 683.537292ms: waiting for machine to come up
	I0116 23:54:07.144668   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:07.144994   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:07.145027   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:07.144980   60794 retry.go:31] will retry after 898.931175ms: waiting for machine to come up
	I0116 23:54:08.045022   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:08.045409   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:08.045437   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:08.045355   60794 retry.go:31] will retry after 1.288697598s: waiting for machine to come up
	I0116 23:54:08.117270   59622 start.go:365] acquiring machines lock for old-k8s-version-771669: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:54:09.335202   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:09.335610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:09.335639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:09.335546   60794 retry.go:31] will retry after 1.355850443s: waiting for machine to come up
	I0116 23:54:10.693078   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:10.693554   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:10.693606   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:10.693520   60794 retry.go:31] will retry after 1.916329826s: waiting for machine to come up
	I0116 23:54:12.611840   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:12.612332   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:12.612367   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:12.612282   60794 retry.go:31] will retry after 2.556862035s: waiting for machine to come up
	I0116 23:54:15.171589   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:15.172039   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:15.172068   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:15.171972   60794 retry.go:31] will retry after 2.519530929s: waiting for machine to come up
	I0116 23:54:17.694557   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:17.694939   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:17.694968   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:17.694886   60794 retry.go:31] will retry after 3.090458186s: waiting for machine to come up
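Editor's note: the "will retry after ...: waiting for machine to come up" lines above show a retry helper waiting for the restarted VM to obtain an IP address, with delays that grow and vary between attempts. The sketch below is a generic backoff-with-jitter illustration of that pattern; the exact policy used by minikube's retry helper is not shown in the log, so the constants here are assumptions.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn and, on failure, sleeps for a randomized,
    // growing delay before trying again, mirroring the "will retry after ..."
    // messages emitted while waiting for the machine to come up.
    func retryWithBackoff(fn func() error, attempts int) error {
    	base := 200 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// Grow the delay and add jitter so concurrent waiters do not poll in lockstep.
    		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    	}
    	return errors.New("machine did not come up")
    }

    func main() {
    	ipKnown := false
    	_ = retryWithBackoff(func() error {
    		if !ipKnown {
    			return errors.New("unable to find current IP address")
    		}
    		return nil
    	}, 5)
    }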
	I0116 23:54:21.986927   60073 start.go:369] acquired machines lock for "embed-certs-837871" in 4m12.827160117s
	I0116 23:54:21.986990   60073 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:21.986998   60073 fix.go:54] fixHost starting: 
	I0116 23:54:21.987380   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:21.987421   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:22.004600   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0116 23:54:22.004995   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:22.005467   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:54:22.005496   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:22.005829   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:22.006029   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:22.006185   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:54:22.008077   60073 fix.go:102] recreateIfNeeded on embed-certs-837871: state=Stopped err=<nil>
	I0116 23:54:22.008103   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	W0116 23:54:22.008290   60073 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:22.010638   60073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-837871" ...
	I0116 23:54:20.788433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788853   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has current primary IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788879   59938 main.go:141] libmachine: (no-preload-085322) Found IP for machine: 192.168.50.183
	I0116 23:54:20.788893   59938 main.go:141] libmachine: (no-preload-085322) Reserving static IP address...
	I0116 23:54:20.789229   59938 main.go:141] libmachine: (no-preload-085322) Reserved static IP address: 192.168.50.183
	I0116 23:54:20.789275   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.789290   59938 main.go:141] libmachine: (no-preload-085322) Waiting for SSH to be available...
	I0116 23:54:20.789318   59938 main.go:141] libmachine: (no-preload-085322) DBG | skip adding static IP to network mk-no-preload-085322 - found existing host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"}
	I0116 23:54:20.789337   59938 main.go:141] libmachine: (no-preload-085322) DBG | Getting to WaitForSSH function...
	I0116 23:54:20.791667   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792013   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.792054   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792155   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH client type: external
	I0116 23:54:20.792182   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa (-rw-------)
	I0116 23:54:20.792239   59938 main.go:141] libmachine: (no-preload-085322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:20.792264   59938 main.go:141] libmachine: (no-preload-085322) DBG | About to run SSH command:
	I0116 23:54:20.792282   59938 main.go:141] libmachine: (no-preload-085322) DBG | exit 0
	I0116 23:54:20.878320   59938 main.go:141] libmachine: (no-preload-085322) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:20.878650   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetConfigRaw
	I0116 23:54:20.879331   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:20.881964   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882374   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.882410   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882680   59938 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:54:20.882904   59938 machine.go:88] provisioning docker machine ...
	I0116 23:54:20.882923   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:20.883142   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883335   59938 buildroot.go:166] provisioning hostname "no-preload-085322"
	I0116 23:54:20.883356   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883553   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:20.885549   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.885943   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.885978   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.886040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:20.886216   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886593   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:20.886774   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:20.887119   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:20.887134   59938 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname
	I0116 23:54:21.013385   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-085322
	
	I0116 23:54:21.013408   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.016312   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016630   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.016670   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016859   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.017058   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017252   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017386   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.017557   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.017929   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.017956   59938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-085322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-085322/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-085322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:21.135238   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:21.135270   59938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:21.135289   59938 buildroot.go:174] setting up certificates
	I0116 23:54:21.135313   59938 provision.go:83] configureAuth start
	I0116 23:54:21.135326   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:21.135618   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.138168   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138443   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.138470   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138654   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.140789   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141120   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.141147   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141324   59938 provision.go:138] copyHostCerts
	I0116 23:54:21.141367   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:21.141377   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:21.141447   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:21.141550   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:21.141561   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:21.141599   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:21.141671   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:21.141682   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:21.141714   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:21.141791   59938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.no-preload-085322 san=[192.168.50.183 192.168.50.183 localhost 127.0.0.1 minikube no-preload-085322]
	I0116 23:54:21.265735   59938 provision.go:172] copyRemoteCerts
	I0116 23:54:21.265800   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:21.265825   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.268291   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268647   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.268676   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268842   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.269076   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.269250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.269383   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.351116   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:21.373208   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 23:54:21.395440   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:54:21.418028   59938 provision.go:86] duration metric: configureAuth took 282.698913ms
	I0116 23:54:21.418069   59938 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:21.418298   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:54:21.418409   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.421433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421751   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.421792   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421959   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.422191   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422491   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.422646   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.422977   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.422995   59938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:21.743469   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:21.743502   59938 machine.go:91] provisioned docker machine in 860.58306ms
	I0116 23:54:21.743515   59938 start.go:300] post-start starting for "no-preload-085322" (driver="kvm2")
	I0116 23:54:21.743538   59938 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:21.743558   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.743870   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:21.743898   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.746430   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746786   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.746823   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746957   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.747146   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.747302   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.747394   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.837160   59938 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:21.841116   59938 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:21.841157   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:21.841249   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:21.841329   59938 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:21.841413   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:21.849407   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:21.872039   59938 start.go:303] post-start completed in 128.504699ms
	I0116 23:54:21.872072   59938 fix.go:56] fixHost completed within 18.75725342s
	I0116 23:54:21.872110   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.874707   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875214   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.875240   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875487   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.875722   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.875867   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.876032   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.876210   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.876556   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.876570   59938 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 23:54:21.986781   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449261.939803143
	
	I0116 23:54:21.986801   59938 fix.go:206] guest clock: 1705449261.939803143
	I0116 23:54:21.986809   59938 fix.go:219] Guest: 2024-01-16 23:54:21.939803143 +0000 UTC Remote: 2024-01-16 23:54:21.872075872 +0000 UTC m=+263.353199909 (delta=67.727271ms)
	I0116 23:54:21.986830   59938 fix.go:190] guest clock delta is within tolerance: 67.727271ms
	I0116 23:54:21.986836   59938 start.go:83] releasing machines lock for "no-preload-085322", held for 18.872049435s
	I0116 23:54:21.986866   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.987132   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.990038   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990450   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.990479   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990658   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991145   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991340   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991433   59938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:21.991476   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.991598   59938 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:21.991622   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.994160   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994384   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994588   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994611   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994696   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.994864   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.994879   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994956   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.995040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.995116   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995212   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.995279   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.995338   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995469   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:22.075709   59938 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:22.113571   59938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:22.255250   59938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:22.261120   59938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:22.261199   59938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:22.275644   59938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:22.275667   59938 start.go:475] detecting cgroup driver to use...
	I0116 23:54:22.275740   59938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:22.292314   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:22.303940   59938 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:22.303994   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:22.316146   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:22.328261   59938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:22.429568   59938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:22.545391   59938 docker.go:233] disabling docker service ...
	I0116 23:54:22.545478   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:22.558823   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:22.571068   59938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:22.680713   59938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:22.784418   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:22.800751   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:22.819671   59938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:22.819738   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.831950   59938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:22.832019   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.842937   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.853168   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.863057   59938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:22.873184   59938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:22.881975   59938 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:22.882051   59938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:22.895888   59938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:22.904754   59938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:23.007196   59938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:23.167523   59938 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:23.167604   59938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:23.172603   59938 start.go:543] Will wait 60s for crictl version
	I0116 23:54:23.172661   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.176234   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:23.211267   59938 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:23.211355   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.255175   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.300404   59938 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 23:54:23.302242   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:23.305445   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.305835   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:23.305860   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.306058   59938 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:23.310150   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:23.321291   59938 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 23:54:23.321348   59938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:23.358829   59938 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 23:54:23.358866   59938 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:54:23.358910   59938 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:23.358974   59938 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.359014   59938 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.359037   59938 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.359019   59938 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 23:54:23.359109   59938 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.359116   59938 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.359192   59938 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360471   59938 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.360486   59938 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.360479   59938 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 23:54:23.360482   59938 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.360503   59938 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:22.012196   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Start
	I0116 23:54:22.012405   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring networks are active...
	I0116 23:54:22.013178   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network default is active
	I0116 23:54:22.013529   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network mk-embed-certs-837871 is active
	I0116 23:54:22.013912   60073 main.go:141] libmachine: (embed-certs-837871) Getting domain xml...
	I0116 23:54:22.014514   60073 main.go:141] libmachine: (embed-certs-837871) Creating domain...
	I0116 23:54:23.261878   60073 main.go:141] libmachine: (embed-certs-837871) Waiting to get IP...
	I0116 23:54:23.263010   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.263550   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.263625   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.263530   60915 retry.go:31] will retry after 307.379701ms: waiting for machine to come up
	I0116 23:54:23.572127   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.572604   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.572640   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.572557   60915 retry.go:31] will retry after 367.767271ms: waiting for machine to come up
	I0116 23:54:23.942420   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.942907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.942937   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.942855   60915 retry.go:31] will retry after 327.227989ms: waiting for machine to come up
	I0116 23:54:23.582933   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.587427   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.591221   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 23:54:23.600943   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.601854   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.620857   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.636430   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.654149   59938 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 23:54:23.654203   59938 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.654256   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.704462   59938 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 23:54:23.704519   59938 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.704571   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851614   59938 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 23:54:23.851646   59938 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 23:54:23.851663   59938 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.851662   59938 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851711   59938 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 23:54:23.851754   59938 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.851767   59938 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 23:54:23.851795   59938 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.851802   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851832   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.851843   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851845   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.868480   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.906566   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.906609   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.906713   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.927452   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.927455   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.927669   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.927767   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.959664   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 23:54:23.959782   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:23.990016   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 23:54:23.990042   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990040   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:23.990089   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990217   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:24.018967   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019064   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 23:54:24.019080   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019089   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019115   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 23:54:24.019135   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019160   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:24.164580   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.888709   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898467269s)
	I0116 23:54:26.888747   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 23:54:26.888768   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888777   59938 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.869591717s)
	I0116 23:54:26.888817   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888824   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 23:54:26.888710   59938 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.869617277s)
	I0116 23:54:26.888879   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 23:54:26.888856   59938 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724243534s)
	I0116 23:54:26.888931   59938 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 23:54:26.888965   59938 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.889006   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:24.271311   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.271747   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.271777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.271695   60915 retry.go:31] will retry after 459.459832ms: waiting for machine to come up
	I0116 23:54:24.732506   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.733007   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.733036   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.732957   60915 retry.go:31] will retry after 584.775753ms: waiting for machine to come up
	I0116 23:54:25.319663   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:25.320171   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:25.320215   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:25.320117   60915 retry.go:31] will retry after 942.568443ms: waiting for machine to come up
	I0116 23:54:26.264735   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:26.265207   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:26.265241   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:26.265152   60915 retry.go:31] will retry after 986.504626ms: waiting for machine to come up
	I0116 23:54:27.253751   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:27.254422   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:27.254451   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:27.254363   60915 retry.go:31] will retry after 1.332096797s: waiting for machine to come up
	I0116 23:54:28.588407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:28.589024   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:28.589057   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:28.588967   60915 retry.go:31] will retry after 1.510766858s: waiting for machine to come up
	I0116 23:54:29.054814   59938 ssh_runner.go:235] Completed: which crictl: (2.165780571s)
	I0116 23:54:29.054899   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:29.054938   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.166081855s)
	I0116 23:54:29.054973   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 23:54:29.055002   59938 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:29.055058   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:32.781289   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.726190592s)
	I0116 23:54:32.781378   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 23:54:32.781384   59938 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.72645917s)
	I0116 23:54:32.781421   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781452   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 23:54:32.781499   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781549   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:32.786061   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 23:54:30.101582   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:30.102035   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:30.102080   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:30.101996   60915 retry.go:31] will retry after 1.681256612s: waiting for machine to come up
	I0116 23:54:31.786133   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:31.786678   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:31.786717   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:31.786625   60915 retry.go:31] will retry after 2.501397759s: waiting for machine to come up
	I0116 23:54:35.155364   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.37383462s)
	I0116 23:54:35.155398   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 23:54:35.155423   59938 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:35.155471   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:37.035841   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880336789s)
	I0116 23:54:37.035878   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 23:54:37.035908   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:37.035957   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:38.382731   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.346744157s)
	I0116 23:54:38.382770   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 23:54:38.382801   59938 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:38.382857   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:34.289289   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:34.289853   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:34.289876   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:34.289788   60915 retry.go:31] will retry after 2.655614857s: waiting for machine to come up
	I0116 23:54:36.947614   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:36.948090   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:36.948110   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:36.948022   60915 retry.go:31] will retry after 3.331974558s: waiting for machine to come up
	I0116 23:54:41.527170   60269 start.go:369] acquired machines lock for "default-k8s-diff-port-967325" in 4m2.660883224s
	I0116 23:54:41.527252   60269 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:41.527265   60269 fix.go:54] fixHost starting: 
	I0116 23:54:41.527698   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:41.527739   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:41.544050   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0116 23:54:41.544467   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:41.544979   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:54:41.545009   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:41.545297   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:41.545474   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:54:41.545619   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:54:41.547250   60269 fix.go:102] recreateIfNeeded on default-k8s-diff-port-967325: state=Stopped err=<nil>
	I0116 23:54:41.547276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	W0116 23:54:41.547440   60269 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:41.550415   60269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-967325" ...
	I0116 23:54:40.284163   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.284689   60073 main.go:141] libmachine: (embed-certs-837871) Found IP for machine: 192.168.39.226
	I0116 23:54:40.284718   60073 main.go:141] libmachine: (embed-certs-837871) Reserving static IP address...
	I0116 23:54:40.284734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has current primary IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.285176   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.285209   60073 main.go:141] libmachine: (embed-certs-837871) DBG | skip adding static IP to network mk-embed-certs-837871 - found existing host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"}
	I0116 23:54:40.285223   60073 main.go:141] libmachine: (embed-certs-837871) Reserved static IP address: 192.168.39.226
	I0116 23:54:40.285240   60073 main.go:141] libmachine: (embed-certs-837871) Waiting for SSH to be available...
	I0116 23:54:40.285254   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Getting to WaitForSSH function...
	I0116 23:54:40.287766   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288257   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.288283   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288417   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH client type: external
	I0116 23:54:40.288441   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa (-rw-------)
	I0116 23:54:40.288466   60073 main.go:141] libmachine: (embed-certs-837871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:40.288473   60073 main.go:141] libmachine: (embed-certs-837871) DBG | About to run SSH command:
	I0116 23:54:40.288481   60073 main.go:141] libmachine: (embed-certs-837871) DBG | exit 0
	I0116 23:54:40.374194   60073 main.go:141] libmachine: (embed-certs-837871) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:40.374646   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetConfigRaw
	I0116 23:54:40.375380   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.378323   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.378843   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.378877   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.379145   60073 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:54:40.379332   60073 machine.go:88] provisioning docker machine ...
	I0116 23:54:40.379351   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:40.379538   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379712   60073 buildroot.go:166] provisioning hostname "embed-certs-837871"
	I0116 23:54:40.379731   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379882   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.382022   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382386   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.382408   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382542   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.382695   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.382833   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.383019   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.383201   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.383686   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.383707   60073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-837871 && echo "embed-certs-837871" | sudo tee /etc/hostname
	I0116 23:54:40.506034   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-837871
	
	I0116 23:54:40.506064   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.508789   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509236   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.509266   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509427   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.509624   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509782   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509909   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.510109   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.510593   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.510620   60073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-837871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-837871/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-837871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:40.626272   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:40.626298   60073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:40.626356   60073 buildroot.go:174] setting up certificates
	I0116 23:54:40.626372   60073 provision.go:83] configureAuth start
	I0116 23:54:40.626383   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.626705   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.629226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629577   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.629605   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629737   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.631784   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632093   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.632114   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632249   60073 provision.go:138] copyHostCerts
	I0116 23:54:40.632306   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:40.632318   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:40.632389   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:40.632489   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:40.632499   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:40.632529   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:40.632607   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:40.632617   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:40.632645   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:40.632705   60073 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.embed-certs-837871 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube embed-certs-837871]
	I0116 23:54:40.842680   60073 provision.go:172] copyRemoteCerts
	I0116 23:54:40.842749   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:40.842778   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.845198   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845585   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.845626   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845798   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.845987   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.846158   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.846313   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:40.931372   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:54:40.955528   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:40.979724   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 23:54:41.000711   60073 provision.go:86] duration metric: configureAuth took 374.325381ms
	I0116 23:54:41.000743   60073 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:41.000988   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:54:41.001078   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.003907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.004256   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004472   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.004703   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.004886   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.005025   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.005172   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.005489   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.005505   60073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:41.294820   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:41.294846   60073 machine.go:91] provisioned docker machine in 915.500911ms
	I0116 23:54:41.294860   60073 start.go:300] post-start starting for "embed-certs-837871" (driver="kvm2")
	I0116 23:54:41.294873   60073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:41.294894   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.295245   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:41.295275   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.298053   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298453   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.298482   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298630   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.298831   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.299028   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.299229   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.383434   60073 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:41.387526   60073 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:41.387550   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:41.387618   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:41.387716   60073 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:41.387832   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:41.395959   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:41.417602   60073 start.go:303] post-start completed in 122.726786ms
	I0116 23:54:41.417634   60073 fix.go:56] fixHost completed within 19.430636017s
	I0116 23:54:41.417657   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.420348   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420665   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.420692   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420853   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.421099   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421245   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421386   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.421532   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.421882   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.421898   60073 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:41.527026   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449281.479666719
	
	I0116 23:54:41.527054   60073 fix.go:206] guest clock: 1705449281.479666719
	I0116 23:54:41.527061   60073 fix.go:219] Guest: 2024-01-16 23:54:41.479666719 +0000 UTC Remote: 2024-01-16 23:54:41.417638777 +0000 UTC m=+272.403645668 (delta=62.027942ms)
	I0116 23:54:41.527080   60073 fix.go:190] guest clock delta is within tolerance: 62.027942ms
	I0116 23:54:41.527085   60073 start.go:83] releasing machines lock for "embed-certs-837871", held for 19.540117712s
	I0116 23:54:41.527105   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.527420   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:41.530393   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.530857   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.530884   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.531031   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531460   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531637   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531720   60073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:41.531774   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.531821   60073 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:41.531854   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.534407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534578   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.534819   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534933   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535031   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.535068   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.535135   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535229   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535308   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535381   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535431   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.535512   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535633   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.653469   60073 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:41.658877   60073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:41.797035   60073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:41.804397   60073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:41.804475   60073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:41.819295   60073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:41.819319   60073 start.go:475] detecting cgroup driver to use...
	I0116 23:54:41.819382   60073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:41.833454   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:41.845089   60073 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:41.845145   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:41.857037   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:41.869156   60073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:41.968252   60073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:42.079885   60073 docker.go:233] disabling docker service ...
	I0116 23:54:42.079949   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:42.091847   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:42.102517   60073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:42.217275   60073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:42.314542   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:42.326438   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:42.342285   60073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:42.342356   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.354962   60073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:42.355039   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.367222   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.379029   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.387819   60073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:42.396923   60073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:42.404505   60073 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:42.404567   60073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:42.415632   60073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:42.423935   60073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:42.520457   60073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:42.676659   60073 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:42.676727   60073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:42.681457   60073 start.go:543] Will wait 60s for crictl version
	I0116 23:54:42.681535   60073 ssh_runner.go:195] Run: which crictl
	I0116 23:54:42.685259   60073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:42.728719   60073 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:42.728807   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.780603   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.830363   60073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:54:39.032115   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 23:54:39.032163   59938 cache_images.go:123] Successfully loaded all cached images
	I0116 23:54:39.032171   59938 cache_images.go:92] LoadImages completed in 15.67329231s
	I0116 23:54:39.032335   59938 ssh_runner.go:195] Run: crio config
	I0116 23:54:39.091256   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:39.091279   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:39.091299   59938 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:39.091318   59938 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-085322 NodeName:no-preload-085322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:39.091470   59938 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-085322"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:39.091558   59938 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-085322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:39.091619   59938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 23:54:39.100748   59938 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:39.100805   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:39.108879   59938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 23:54:39.123478   59938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 23:54:39.138234   59938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 23:54:39.153408   59938 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:39.156806   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:39.168459   59938 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322 for IP: 192.168.50.183
	I0116 23:54:39.168490   59938 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:39.168630   59938 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:39.168669   59938 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:39.168728   59938 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/client.key
	I0116 23:54:39.168800   59938 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key.c63b40e0
	I0116 23:54:39.168839   59938 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key
	I0116 23:54:39.168946   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:39.168971   59938 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:39.168981   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:39.169006   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:39.169029   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:39.169052   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:39.169104   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:39.169755   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:39.191634   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:54:39.213185   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:39.234431   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:54:39.255434   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:39.277092   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:39.299752   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:39.321124   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:39.342706   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:39.363848   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:39.384588   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:39.405641   59938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:39.421517   59938 ssh_runner.go:195] Run: openssl version
	I0116 23:54:39.426839   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:39.435875   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440157   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440217   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.445267   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:39.454308   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:39.463232   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467601   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467660   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.473056   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:39.482143   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:39.491441   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495918   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495984   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.501453   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:39.510832   59938 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:39.515055   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:39.520820   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:39.526190   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:39.531649   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:39.536949   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:39.542406   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:39.547673   59938 kubeadm.go:404] StartCluster: {Name:no-preload-085322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:39.547793   59938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:39.547843   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:39.584159   59938 cri.go:89] found id: ""
	I0116 23:54:39.584236   59938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:39.592749   59938 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:39.592769   59938 kubeadm.go:636] restartCluster start
	I0116 23:54:39.592830   59938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:39.600998   59938 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:39.602031   59938 kubeconfig.go:92] found "no-preload-085322" server: "https://192.168.50.183:8443"
	I0116 23:54:39.604410   59938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:39.612167   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:39.612220   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:39.622740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.112200   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.112274   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.123342   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.612980   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.613059   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.624162   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.112722   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.112787   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.123740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.612248   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.626135   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.112616   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.112723   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.126872   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.612417   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.612503   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.623787   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.112309   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.112383   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.127168   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.551739   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Start
	I0116 23:54:41.551879   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring networks are active...
	I0116 23:54:41.552631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network default is active
	I0116 23:54:41.552977   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network mk-default-k8s-diff-port-967325 is active
	I0116 23:54:41.553395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Getting domain xml...
	I0116 23:54:41.554029   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Creating domain...
	I0116 23:54:42.830696   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting to get IP...
	I0116 23:54:42.831669   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832085   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832186   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:42.832069   61077 retry.go:31] will retry after 250.838508ms: waiting for machine to come up
	I0116 23:54:43.084848   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085478   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085513   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.085378   61077 retry.go:31] will retry after 344.020128ms: waiting for machine to come up
	I0116 23:54:43.430795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431300   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431329   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.431260   61077 retry.go:31] will retry after 397.588837ms: waiting for machine to come up
	I0116 23:54:42.831766   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:42.834360   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:42.834763   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834949   60073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:42.838761   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:42.853154   60073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:54:42.853222   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:42.890184   60073 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:54:42.890265   60073 ssh_runner.go:195] Run: which lz4
	I0116 23:54:42.894168   60073 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:54:42.898036   60073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:54:42.898066   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:54:43.612492   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.612614   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.626278   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.112257   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.112377   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.126612   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.612241   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.626667   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.112214   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.112305   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.127417   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.612957   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.613061   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.626610   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.112219   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.112324   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.126151   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.612419   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.612513   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.623163   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.112516   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.112621   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.123247   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.612620   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.612713   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.623687   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.112357   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.112460   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.126673   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.830893   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831467   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.831405   61077 retry.go:31] will retry after 443.763933ms: waiting for machine to come up
	I0116 23:54:44.277218   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277738   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.277666   61077 retry.go:31] will retry after 534.948362ms: waiting for machine to come up
	I0116 23:54:44.814256   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814634   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.814585   61077 retry.go:31] will retry after 942.746702ms: waiting for machine to come up
	I0116 23:54:45.758822   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759311   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759340   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:45.759238   61077 retry.go:31] will retry after 1.189643515s: waiting for machine to come up
	I0116 23:54:46.951211   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951644   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:46.951576   61077 retry.go:31] will retry after 1.124824496s: waiting for machine to come up
	I0116 23:54:48.077539   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.077964   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.078001   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:48.077909   61077 retry.go:31] will retry after 1.239334518s: waiting for machine to come up
	I0116 23:54:44.553853   60073 crio.go:444] Took 1.659729 seconds to copy over tarball
	I0116 23:54:44.553941   60073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:54:47.428880   60073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87490029s)
	I0116 23:54:47.428913   60073 crio.go:451] Took 2.875036 seconds to extract the tarball
	I0116 23:54:47.428921   60073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:54:47.469606   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:47.521549   60073 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:54:47.521580   60073 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:54:47.521660   60073 ssh_runner.go:195] Run: crio config
	I0116 23:54:47.575254   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:54:47.575276   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:47.575292   60073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:47.575309   60073 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-837871 NodeName:embed-certs-837871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:47.575434   60073 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-837871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:47.575518   60073 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-837871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:47.575569   60073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:54:47.584525   60073 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:47.584604   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:47.592958   60073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 23:54:47.608090   60073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:54:47.623862   60073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 23:54:47.640242   60073 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:47.644031   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:47.658210   60073 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871 for IP: 192.168.39.226
	I0116 23:54:47.658247   60073 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:47.658451   60073 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:47.658543   60073 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:47.658766   60073 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/client.key
	I0116 23:54:47.658866   60073 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key.1754aec7
	I0116 23:54:47.658920   60073 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key
	I0116 23:54:47.659066   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:47.659104   60073 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:47.659123   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:47.659160   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:47.659190   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:47.659223   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:47.659275   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:47.659998   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:47.687031   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:54:47.713026   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:47.738546   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:54:47.764460   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:47.789464   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:47.814847   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:47.839476   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:47.864396   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:47.889208   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:47.914128   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:47.935079   60073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:47.950932   60073 ssh_runner.go:195] Run: openssl version
	I0116 23:54:47.957306   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:47.967238   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972287   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972338   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.977862   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:47.989326   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:47.999739   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004111   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004170   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.009425   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:48.019822   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:48.029871   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034154   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034221   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.039911   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
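Each CA bundle above is symlinked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0) so TLS tooling that scans the hashed directory can resolve it. A sketch of the same step, mirroring the commands in the log (illustrative only):

  # Link a CA cert under its subject hash, as the log does for minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"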
	I0116 23:54:48.051585   60073 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:48.056576   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:48.062200   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:48.067931   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:48.073393   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:48.079291   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:48.084923   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
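The openssl -checkend 86400 calls above confirm that each control-plane certificate stays valid for at least another 24 hours before the existing certs are reused; a non-zero exit would force regeneration. An equivalent standalone check (illustrative):

  # Exit status 0: certificate still valid 86400s (24h) from now; non-zero: expiring or expired
  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
    && echo "ok for >= 24h" || echo "needs regeneration"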
	I0116 23:54:48.090458   60073 kubeadm.go:404] StartCluster: {Name:embed-certs-837871 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:48.090572   60073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:48.090637   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:48.132138   60073 cri.go:89] found id: ""
	I0116 23:54:48.132214   60073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:48.141955   60073 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:48.141976   60073 kubeadm.go:636] restartCluster start
	I0116 23:54:48.142032   60073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:48.151297   60073 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.152324   60073 kubeconfig.go:92] found "embed-certs-837871" server: "https://192.168.39.226:8443"
	I0116 23:54:48.154585   60073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:48.163509   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.163570   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.175536   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.664083   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.664180   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.676605   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.613067   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.992894   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.004266   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.112494   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.112595   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.123795   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.612548   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.612642   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.626676   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.626707   59938 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:49.626718   59938 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:49.626732   59938 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:49.626806   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:49.668119   59938 cri.go:89] found id: ""
	I0116 23:54:49.668192   59938 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:49.682918   59938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:49.691744   59938 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:49.691817   59938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700863   59938 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700895   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:49.815616   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.020421   59938 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.204764214s)
	I0116 23:54:51.020454   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.216832   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.332109   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.399376   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:51.399475   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:51.899827   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.400392   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.899528   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.399686   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:49.319244   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319686   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319717   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:49.319624   61077 retry.go:31] will retry after 1.922153535s: waiting for machine to come up
	I0116 23:54:51.243587   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244058   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244098   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:51.244008   61077 retry.go:31] will retry after 2.437065869s: waiting for machine to come up
	I0116 23:54:53.683433   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683851   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683882   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:53.683823   61077 retry.go:31] will retry after 3.130209662s: waiting for machine to come up
	I0116 23:54:49.163895   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.351314   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.362966   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.664243   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.664369   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.683487   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.163655   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.163757   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.180005   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.664531   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.664611   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.680106   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.163758   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.163894   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.179982   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.664626   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.664708   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.676699   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.163544   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.163670   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.180656   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.663792   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.663880   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.678849   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.164052   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.164169   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.178666   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.664220   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.664316   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.678867   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.899990   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.919132   59938 api_server.go:72] duration metric: took 2.51975517s to wait for apiserver process to appear ...
	I0116 23:54:53.919159   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:54:53.919179   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.905143   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.905180   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.905196   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.941657   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.941684   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.941697   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.986154   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.986183   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:57.419788   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.424352   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.424379   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:57.919987   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.926989   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.927013   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:58.420219   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:58.426904   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:54:58.435007   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:54:58.435038   59938 api_server.go:131] duration metric: took 4.515871856s to wait for apiserver health ...
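The healthz polling above moves through three states: 403 while anonymous access to /healthz is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are pending, and finally 200. A rough manual equivalent of the same probe, using the apiserver address from this log (a sketch only; once the bootstrap RBAC roles exist, /healthz is readable even without credentials):

  # Poll the apiserver health endpoint with per-check detail
  curl -k "https://192.168.50.183:8443/healthz?verbose"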
	I0116 23:54:58.435051   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:58.435061   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:58.437150   59938 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:54:58.438936   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:54:58.455657   59938 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
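The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube recommends for the kvm2 + crio combination noted above. To see the file that was actually written (a sketch, assuming the no-preload-085322 profile from this log):

  # Illustrative only: dump the bridge CNI conflist minikube just installed
  minikube ssh -p no-preload-085322 -- sudo cat /etc/cni/net.d/1-k8s.conflist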
	I0116 23:54:58.508821   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:54:58.522305   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:54:58.522361   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:54:58.522372   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:54:58.522386   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:54:58.522403   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:54:58.522414   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:54:58.522428   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:54:58.522440   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:54:58.522449   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:54:58.522459   59938 system_pods.go:74] duration metric: took 13.604825ms to wait for pod list to return data ...
	I0116 23:54:58.522472   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:54:58.525739   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:54:58.525780   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:54:58.525802   59938 node_conditions.go:105] duration metric: took 3.32348ms to run NodePressure ...
	I0116 23:54:58.525836   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:56.815572   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816189   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816215   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:56.816141   61077 retry.go:31] will retry after 4.356544243s: waiting for machine to come up
	I0116 23:54:54.164263   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.164410   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.179137   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:54.663638   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.663755   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.678463   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.163957   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.164041   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.177018   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.663543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.663648   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.674693   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.164347   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.164456   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.175674   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.664319   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.664402   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.675373   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.164471   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.164576   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.176504   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.664144   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.664251   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.676983   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.164543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:58.164621   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:58.176779   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.176811   60073 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:58.176821   60073 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:58.176833   60073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:58.176899   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:58.214453   60073 cri.go:89] found id: ""
	I0116 23:54:58.214526   60073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:58.232076   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:58.240808   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:58.240879   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.249983   60073 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.250013   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.373313   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.857922   59938 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862719   59938 kubeadm.go:787] kubelet initialised
	I0116 23:54:58.862738   59938 kubeadm.go:788] duration metric: took 4.782925ms waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862746   59938 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:54:58.869022   59938 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.874505   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874535   59938 pod_ready.go:81] duration metric: took 5.485562ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.874546   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874554   59938 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.879329   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879355   59938 pod_ready.go:81] duration metric: took 4.787755ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.879363   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879368   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.883928   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883949   59938 pod_ready.go:81] duration metric: took 4.571713ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.883961   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883969   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.912868   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912894   59938 pod_ready.go:81] duration metric: took 28.911722ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.912907   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912915   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.313029   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313069   59938 pod_ready.go:81] duration metric: took 400.142619ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.313082   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313090   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.712991   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713014   59938 pod_ready.go:81] duration metric: took 399.912003ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.713023   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713028   59938 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:00.114190   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114215   59938 pod_ready.go:81] duration metric: took 401.177651ms waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:00.114225   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114231   59938 pod_ready.go:38] duration metric: took 1.251475914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:00.114247   59938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:00.127362   59938 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:00.127388   59938 kubeadm.go:640] restartCluster took 20.534611532s
	I0116 23:55:00.127403   59938 kubeadm.go:406] StartCluster complete in 20.579733794s
	I0116 23:55:00.127422   59938 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.127503   59938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:00.129224   59938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.129463   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:00.130188   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:55:00.129546   59938 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:00.130489   59938 addons.go:69] Setting storage-provisioner=true in profile "no-preload-085322"
	I0116 23:55:00.130520   59938 addons.go:234] Setting addon storage-provisioner=true in "no-preload-085322"
	W0116 23:55:00.130550   59938 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:00.130626   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.131148   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.131179   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.131603   59938 addons.go:69] Setting default-storageclass=true in profile "no-preload-085322"
	I0116 23:55:00.131662   59938 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-085322"
	I0116 23:55:00.132229   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.132282   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.132642   59938 addons.go:69] Setting metrics-server=true in profile "no-preload-085322"
	I0116 23:55:00.132682   59938 addons.go:234] Setting addon metrics-server=true in "no-preload-085322"
	W0116 23:55:00.132691   59938 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:00.132738   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.133280   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.133322   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.137759   59938 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-085322" context rescaled to 1 replicas
	I0116 23:55:00.137827   59938 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:00.139774   59938 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:00.141410   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:00.150892   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0116 23:55:00.151398   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.151952   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.151970   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.152274   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0116 23:55:00.152458   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0116 23:55:00.152489   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.152695   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.152865   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153081   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153356   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153401   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153541   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153583   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153867   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.153942   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.154667   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.154714   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.155326   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.155362   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.156980   59938 addons.go:234] Setting addon default-storageclass=true in "no-preload-085322"
	W0116 23:55:00.157007   59938 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:00.157043   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.157421   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.157529   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.174130   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0116 23:55:00.174627   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.175185   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.175204   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.175566   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.175814   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.175862   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0116 23:55:00.176349   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.176936   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.176948   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.177295   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.177469   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.177631   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.179319   59938 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:00.180744   59938 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.180762   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:00.180777   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.179023   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.182381   59938 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:00.183551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:00.183564   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:00.183585   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.183692   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184112   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.184133   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.184767   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.184932   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.185450   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.186460   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.186779   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.186812   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.187038   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.187221   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.187328   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.187452   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.189369   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0116 23:55:00.189703   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.190080   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.190091   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.190478   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.190890   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.190930   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.205734   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0116 23:55:00.206238   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.206799   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.206818   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.207212   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.207446   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.208811   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.209063   59938 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.209077   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:00.209094   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.211899   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212297   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.212323   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212575   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.212826   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.213095   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.213275   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.307298   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.335551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:00.335575   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:00.372999   59938 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:00.373001   59938 node_ready.go:35] waiting up to 6m0s for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:00.378131   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:00.378152   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:00.380282   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.401018   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:00.401069   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:00.426132   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093491344s)
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020515974s)
	I0116 23:55:01.400920   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400937   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400965   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400993   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400886   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401092   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401295   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401313   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401324   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401334   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401360   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401402   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401416   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401417   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401426   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401436   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401448   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401458   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401468   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401476   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401725   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401757   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401781   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401789   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401797   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401950   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401973   59938 addons.go:470] Verifying addon metrics-server=true in "no-preload-085322"
	I0116 23:55:01.403136   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.403161   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.403172   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.410263   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.410287   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.410536   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.410575   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.410578   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.412923   59938 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0116 23:55:02.567723   59622 start.go:369] acquired machines lock for "old-k8s-version-771669" in 54.450397128s
	I0116 23:55:02.567772   59622 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:55:02.567779   59622 fix.go:54] fixHost starting: 
	I0116 23:55:02.568183   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:02.568215   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:02.587692   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0116 23:55:02.588096   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:02.588571   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:02.588590   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:02.588934   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:02.589163   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:02.589273   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:02.590929   59622 fix.go:102] recreateIfNeeded on old-k8s-version-771669: state=Stopped err=<nil>
	I0116 23:55:02.591002   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	W0116 23:55:02.591207   59622 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:55:02.593233   59622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-771669" ...
	I0116 23:55:01.414436   59938 addons.go:505] enable addons completed in 1.284891826s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0116 23:55:02.377542   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:01.175656   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Found IP for machine: 192.168.61.144
	I0116 23:55:01.176276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has current primary IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176287   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserving static IP address...
	I0116 23:55:01.176764   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserved static IP address: 192.168.61.144
	I0116 23:55:01.176803   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.176821   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for SSH to be available...
	I0116 23:55:01.176849   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | skip adding static IP to network mk-default-k8s-diff-port-967325 - found existing host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"}
	I0116 23:55:01.176862   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Getting to WaitForSSH function...
	I0116 23:55:01.179585   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180052   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.180086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH client type: external
	I0116 23:55:01.180225   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa (-rw-------)
	I0116 23:55:01.180258   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:01.180280   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | About to run SSH command:
	I0116 23:55:01.180298   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | exit 0
	I0116 23:55:01.287063   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:01.287361   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetConfigRaw
	I0116 23:55:01.288015   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.291188   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291601   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.291651   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291892   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:55:01.292147   60269 machine.go:88] provisioning docker machine ...
	I0116 23:55:01.292171   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:01.292392   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292603   60269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-967325"
	I0116 23:55:01.292631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.295688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.296107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296214   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.296399   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296557   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296732   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.296957   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.297484   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.297508   60269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-967325 && echo "default-k8s-diff-port-967325" | sudo tee /etc/hostname
	I0116 23:55:01.444451   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-967325
	
	I0116 23:55:01.444484   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.447658   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448083   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.448130   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448237   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.448482   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448670   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448836   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.449035   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.449518   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.449549   60269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-967325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-967325/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-967325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:01.592961   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:01.592998   60269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:01.593037   60269 buildroot.go:174] setting up certificates
	I0116 23:55:01.593052   60269 provision.go:83] configureAuth start
	I0116 23:55:01.593066   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.593369   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.596637   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597053   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.597093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.599945   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600294   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.600332   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600435   60269 provision.go:138] copyHostCerts
	I0116 23:55:01.600492   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:01.600500   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:01.600560   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:01.600653   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:01.600657   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:01.600675   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:01.600733   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:01.600736   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:01.600751   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:01.600807   60269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-967325 san=[192.168.61.144 192.168.61.144 localhost 127.0.0.1 minikube default-k8s-diff-port-967325]
	I0116 23:55:01.777575   60269 provision.go:172] copyRemoteCerts
	I0116 23:55:01.777655   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:01.777685   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.780729   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.781117   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781323   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.781493   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.781672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.781817   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:01.875542   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:01.898144   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 23:55:01.923770   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:01.947374   60269 provision.go:86] duration metric: configureAuth took 354.306627ms
	I0116 23:55:01.947400   60269 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:01.947656   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:55:01.947752   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.950688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951006   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.951031   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951309   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.951475   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951846   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.952024   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.952549   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.952575   60269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:02.296465   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:02.296504   60269 machine.go:91] provisioned docker machine in 1.004340116s
	I0116 23:55:02.296517   60269 start.go:300] post-start starting for "default-k8s-diff-port-967325" (driver="kvm2")
	I0116 23:55:02.296533   60269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:02.296559   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.296898   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:02.296931   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.299843   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.300330   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300424   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.300613   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.300813   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.300988   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.392380   60269 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:02.396719   60269 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:02.396746   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:02.396840   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:02.396931   60269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:02.397013   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:02.405217   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:02.428260   60269 start.go:303] post-start completed in 131.726459ms
	I0116 23:55:02.428289   60269 fix.go:56] fixHost completed within 20.901025477s
	I0116 23:55:02.428351   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.431541   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.431904   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.431935   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.432124   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.432327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432679   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.432865   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:02.433181   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:02.433200   60269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:02.567559   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449302.518065106
	
	I0116 23:55:02.567583   60269 fix.go:206] guest clock: 1705449302.518065106
	I0116 23:55:02.567592   60269 fix.go:219] Guest: 2024-01-16 23:55:02.518065106 +0000 UTC Remote: 2024-01-16 23:55:02.428292966 +0000 UTC m=+263.717566224 (delta=89.77214ms)
	I0116 23:55:02.567628   60269 fix.go:190] guest clock delta is within tolerance: 89.77214ms
	I0116 23:55:02.567634   60269 start.go:83] releasing machines lock for "default-k8s-diff-port-967325", held for 21.040406039s
	I0116 23:55:02.567676   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.567951   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:02.571196   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.571612   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.571641   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.572815   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573415   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573626   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573709   60269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:02.573777   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.573935   60269 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:02.573963   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.577057   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577347   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577687   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577741   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577786   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577804   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577976   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578023   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578172   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578358   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578359   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578488   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.578514   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.707601   60269 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:02.715420   60269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:02.871362   60269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:02.878362   60269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:02.878438   60269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:02.898508   60269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:02.898534   60269 start.go:475] detecting cgroup driver to use...
	I0116 23:55:02.898627   60269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:02.915544   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:02.929881   60269 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:02.929948   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:02.946126   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:02.963314   60269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:03.087669   60269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:03.231908   60269 docker.go:233] disabling docker service ...
	I0116 23:55:03.232001   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:03.247745   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:03.263573   60269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:03.394931   60269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:03.533725   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:03.550475   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:03.571922   60269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:55:03.571984   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.584086   60269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:03.584195   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.595191   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.604671   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.614076   60269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:03.623637   60269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:03.632143   60269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:03.632225   60269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:03.645964   60269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:03.657719   60269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:59.164409   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.363424   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.434315   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.505227   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:59.505321   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.006175   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.505693   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.005697   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.505467   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.005808   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.033017   60073 api_server.go:72] duration metric: took 2.527792184s to wait for apiserver process to appear ...
	I0116 23:55:02.033039   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:02.033056   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:03.785123   60269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:03.976744   60269 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:03.976819   60269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:03.981545   60269 start.go:543] Will wait 60s for crictl version
	I0116 23:55:03.981598   60269 ssh_runner.go:195] Run: which crictl
	I0116 23:55:03.985233   60269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:04.033443   60269 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:04.033541   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.087776   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.142302   60269 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:55:02.594568   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Start
	I0116 23:55:02.594750   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring networks are active...
	I0116 23:55:02.595457   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network default is active
	I0116 23:55:02.595812   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network mk-old-k8s-version-771669 is active
	I0116 23:55:02.596285   59622 main.go:141] libmachine: (old-k8s-version-771669) Getting domain xml...
	I0116 23:55:02.597150   59622 main.go:141] libmachine: (old-k8s-version-771669) Creating domain...
	I0116 23:55:03.999986   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting to get IP...
	I0116 23:55:04.001060   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.001581   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.001663   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.001550   61289 retry.go:31] will retry after 298.561748ms: waiting for machine to come up
	I0116 23:55:04.302120   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.302820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.302847   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.302767   61289 retry.go:31] will retry after 342.293835ms: waiting for machine to come up
	I0116 23:55:04.646424   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.647107   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.647133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.647055   61289 retry.go:31] will retry after 395.611503ms: waiting for machine to come up
	I0116 23:55:05.046785   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.047276   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.047304   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.047189   61289 retry.go:31] will retry after 552.22886ms: waiting for machine to come up
	I0116 23:55:07.029353   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.029384   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.029401   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.187789   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.187830   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.187877   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.197889   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.197924   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.533214   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.540976   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:07.541008   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.033550   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.044749   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:08.044779   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.533231   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.540197   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0116 23:55:08.551065   60073 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:08.551108   60073 api_server.go:131] duration metric: took 6.518060223s to wait for apiserver health ...
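The 403 and 500 responses above are normal while the apiserver finishes its post-start hooks; the "[-] ... failed: reason withheld" entries mean the unauthenticated probe is not allowed to see the failing check's detail. With the admin kubeconfig on the node, a verbose healthz query would show the reasons (illustrative command using the binaries path seen in this log, not something the test itself runs):

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'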
	I0116 23:55:08.551119   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:55:08.551128   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:08.553370   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:04.377661   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:06.377732   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:07.377978   59938 node_ready.go:49] node "no-preload-085322" has status "Ready":"True"
	I0116 23:55:07.378007   59938 node_ready.go:38] duration metric: took 7.004955625s waiting for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:07.378019   59938 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:07.394319   59938 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401604   59938 pod_ready.go:92] pod "coredns-76f75df574-ptq95" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.401634   59938 pod_ready.go:81] duration metric: took 7.260618ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401647   59938 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412094   59938 pod_ready.go:92] pod "etcd-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.412123   59938 pod_ready.go:81] duration metric: took 10.46753ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412137   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922096   59938 pod_ready.go:92] pod "kube-apiserver-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.922169   59938 pod_ready.go:81] duration metric: took 510.023791ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922208   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929615   59938 pod_ready.go:92] pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.929645   59938 pod_ready.go:81] duration metric: took 7.422332ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929659   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178529   59938 pod_ready.go:92] pod "kube-proxy-64z5c" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.178558   59938 pod_ready.go:81] duration metric: took 248.89013ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178572   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
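pod_ready.go polls each system-critical pod for the Ready condition via client-go; roughly the same check can be expressed with kubectl (illustrative equivalent only, not what the test executes):

	kubectl --context no-preload-085322 -n kube-system wait pod \
	  --for=condition=Ready -l k8s-app=kube-proxy --timeout=6m0s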
	I0116 23:55:04.144239   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:04.147395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.147816   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:04.147864   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.148032   60269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:04.152106   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:04.166312   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:55:04.166412   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:04.207955   60269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:55:04.208024   60269 ssh_runner.go:195] Run: which lz4
	I0116 23:55:04.211817   60269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:04.215791   60269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:04.215816   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:55:06.109275   60269 crio.go:444] Took 1.897478 seconds to copy over tarball
	I0116 23:55:06.109361   60269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:08.555066   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:08.584102   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
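The 457-byte /etc/cni/net.d/1-k8s.conflist written here is minikube's bridge CNI configuration. A representative bridge-plus-portmap conflist written by hand looks like the following (illustrative sketch only; the exact file minikube generates may differ):

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "addIf": "true",
	      "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF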
	I0116 23:55:08.660533   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:08.680559   60073 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:08.680588   60073 system_pods.go:61] "coredns-5dd5756b68-49p2f" [5241a39a-599e-4ae2-b8c8-7494382819d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:08.680595   60073 system_pods.go:61] "etcd-embed-certs-837871" [99fce5e6-124e-4e96-b722-41c0be595863] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:08.680603   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [7bf73dd6-7f27-482a-896a-a5097bd047a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:08.680609   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [be8f34fb-2d00-4c86-aab3-c4d74d92d42c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:08.680615   60073 system_pods.go:61] "kube-proxy-nglts" [3ec00f1a-258b-4da3-9b41-dbd96156de04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:08.680624   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [f9af2c43-cb66-4ebb-b23c-4f898be33d64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:08.680669   60073 system_pods.go:61] "metrics-server-57f55c9bc5-npd7s" [5aa75079-2c85-4fde-ba88-9ae5bb73ecc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:08.680678   60073 system_pods.go:61] "storage-provisioner" [5bae4d8b-030b-4476-8aa6-f4a66a8f80a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:55:08.680685   60073 system_pods.go:74] duration metric: took 20.127241ms to wait for pod list to return data ...
	I0116 23:55:08.680695   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:08.685562   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:08.685594   60073 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:08.685604   60073 node_conditions.go:105] duration metric: took 4.905393ms to run NodePressure ...
	I0116 23:55:08.685622   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:05.600887   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.601408   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.601444   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.601312   61289 retry.go:31] will retry after 584.67072ms: waiting for machine to come up
	I0116 23:55:06.188018   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:06.188524   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:06.188550   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:06.188434   61289 retry.go:31] will retry after 859.064841ms: waiting for machine to come up
	I0116 23:55:07.048810   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:07.049461   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:07.049491   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:07.049417   61289 retry.go:31] will retry after 1.064800753s: waiting for machine to come up
	I0116 23:55:08.115741   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:08.116406   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:08.116430   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:08.116372   61289 retry.go:31] will retry after 1.289118736s: waiting for machine to come up
	I0116 23:55:09.407820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:09.408291   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:09.408319   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:09.408262   61289 retry.go:31] will retry after 1.623353195s: waiting for machine to come up
	I0116 23:55:08.979310   59938 pod_ready.go:92] pod "kube-scheduler-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.979407   59938 pod_ready.go:81] duration metric: took 800.824219ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.979438   59938 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.546193   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:09.452388   60269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342992298s)
	I0116 23:55:09.452415   60269 crio.go:451] Took 3.343109 seconds to extract the tarball
	I0116 23:55:09.452423   60269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:09.497202   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:09.552426   60269 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:55:09.552460   60269 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:55:09.552532   60269 ssh_runner.go:195] Run: crio config
	I0116 23:55:09.623685   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:09.623716   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:09.623743   60269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:09.623767   60269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-967325 NodeName:default-k8s-diff-port-967325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:55:09.623938   60269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-967325"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:09.624024   60269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-967325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 23:55:09.624079   60269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:55:09.632768   60269 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:09.632838   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:09.642978   60269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 23:55:09.660304   60269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:09.677864   60269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
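The kubeadm.yaml.new just copied over contains the four API documents printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A quick sanity check on the node could list the document kinds, and recent kubeadm builds can lint the file directly (treat the validate subcommand as an assumption for this version):

	grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new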
	I0116 23:55:09.699234   60269 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:09.703170   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:09.718511   60269 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325 for IP: 192.168.61.144
	I0116 23:55:09.718551   60269 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:09.718727   60269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:09.718798   60269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:09.718895   60269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/client.key
	I0116 23:55:09.718975   60269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key.a430fbc2
	I0116 23:55:09.719039   60269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key
	I0116 23:55:09.719175   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:09.719225   60269 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:09.719240   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:09.719283   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:09.719318   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:09.719358   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:09.719416   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:09.720339   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:09.748578   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:55:09.778396   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:09.803745   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:55:09.828009   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:09.850951   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:09.874273   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:09.897385   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:09.923319   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:09.946301   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:09.970778   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:09.994497   60269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:10.013259   60269 ssh_runner.go:195] Run: openssl version
	I0116 23:55:10.020357   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:10.032324   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037071   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037122   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.043220   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:10.052796   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:10.063065   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.067904   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.068000   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.074570   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:10.087080   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:10.099734   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105299   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105360   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.112084   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
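The 8-hex-digit symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention, which is how CA lookup in /etc/ssl/certs works; the hash comes straight from each certificate (illustrative, using the minikube CA from this log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0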
	I0116 23:55:10.123175   60269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:10.127669   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:10.133522   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:10.139085   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:10.145018   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:10.150920   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:10.156719   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
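These openssl invocations use -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds, so each control-plane certificate is being verified as valid for at least another 24 hours. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "cert valid for at least 24h" \
	  || echo "cert expires within 24h (or could not be read)"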
	I0116 23:55:10.162808   60269 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:10.162893   60269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:10.162936   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:10.208917   60269 cri.go:89] found id: ""
	I0116 23:55:10.209008   60269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:10.221689   60269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:10.221710   60269 kubeadm.go:636] restartCluster start
	I0116 23:55:10.221776   60269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:10.233762   60269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.234916   60269 kubeconfig.go:92] found "default-k8s-diff-port-967325" server: "https://192.168.61.144:8444"
	I0116 23:55:10.237484   60269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:10.246418   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.246495   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.257759   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.747378   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.747466   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.761884   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.247445   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.247543   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.258490   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.747483   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.747623   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.764389   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.246997   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.247122   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.262538   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.747219   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.747387   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.762535   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.246636   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.246705   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.258883   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.747504   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.747588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.759640   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:09.229704   60073 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224745   60073 kubeadm.go:787] kubelet initialised
	I0116 23:55:10.224771   60073 kubeadm.go:788] duration metric: took 994.984702ms waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224781   60073 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:11.348058   60073 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.356516   60073 pod_ready.go:102] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:13.856540   60073 pod_ready.go:92] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:13.856573   60073 pod_ready.go:81] duration metric: took 2.508479475s waiting for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.856586   60073 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.033009   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:11.033544   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:11.033588   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:11.033487   61289 retry.go:31] will retry after 1.553841353s: waiting for machine to come up
	I0116 23:55:12.588794   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:12.589269   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:12.589297   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:12.589245   61289 retry.go:31] will retry after 1.907517113s: waiting for machine to come up
	I0116 23:55:14.499305   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:14.499734   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:14.499759   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:14.499683   61289 retry.go:31] will retry after 3.406811143s: waiting for machine to come up
	I0116 23:55:13.986208   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:15.987948   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:18.490012   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:14.247197   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.247299   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.262013   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:14.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.746558   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.761452   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.246988   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.247075   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.261345   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.747524   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.747618   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.760291   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.246551   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.246648   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.260545   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.746471   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.746585   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.758637   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.247227   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.247331   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.258514   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.747046   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.747138   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.758877   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.247489   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.247561   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.259581   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.747241   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.747335   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.759146   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.867702   60073 pod_ready.go:102] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:17.864681   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.864706   60073 pod_ready.go:81] duration metric: took 4.008111977s waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.864718   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873106   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.873127   60073 pod_ready.go:81] duration metric: took 8.400576ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873136   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878501   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.878519   60073 pod_ready.go:81] duration metric: took 5.375395ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878535   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883653   60073 pod_ready.go:92] pod "kube-proxy-nglts" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.883669   60073 pod_ready.go:81] duration metric: took 5.128525ms waiting for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883680   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.888978   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.888996   60073 pod_ready.go:81] duration metric: took 5.309484ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.889011   60073 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.908092   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:17.908486   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:17.908520   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:17.908432   61289 retry.go:31] will retry after 3.983135021s: waiting for machine to come up
	I0116 23:55:20.987833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:22.989682   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:19.246437   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.246547   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.257900   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:19.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.746572   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.758509   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.247334   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:20.247418   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:20.258909   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.258939   60269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:20.258948   60269 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:20.258958   60269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:20.259023   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:20.300659   60269 cri.go:89] found id: ""
	I0116 23:55:20.300740   60269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:20.315326   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:20.323563   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:20.323629   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331846   60269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331871   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:20.443085   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.556705   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113585461s)
	I0116 23:55:21.556730   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.745024   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.824910   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.916770   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:21.916856   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.416983   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.917411   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:23.417012   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:19.896636   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.898504   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.896143   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896665   59622 main.go:141] libmachine: (old-k8s-version-771669) Found IP for machine: 192.168.72.114
	I0116 23:55:21.896717   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has current primary IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896729   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserving static IP address...
	I0116 23:55:21.897128   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.897157   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | skip adding static IP to network mk-old-k8s-version-771669 - found existing host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"}
	I0116 23:55:21.897174   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Getting to WaitForSSH function...
	I0116 23:55:21.897194   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserved static IP address: 192.168.72.114
	I0116 23:55:21.897207   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting for SSH to be available...
	I0116 23:55:21.900064   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900492   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.900531   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900775   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH client type: external
	I0116 23:55:21.900805   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa (-rw-------)
	I0116 23:55:21.900835   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:21.900852   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | About to run SSH command:
	I0116 23:55:21.900867   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | exit 0
	I0116 23:55:22.002573   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:22.003051   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetConfigRaw
	I0116 23:55:22.003790   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.007208   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.007726   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007947   59622 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:55:22.008199   59622 machine.go:88] provisioning docker machine ...
	I0116 23:55:22.008225   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.008439   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008649   59622 buildroot.go:166] provisioning hostname "old-k8s-version-771669"
	I0116 23:55:22.008672   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008859   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.011893   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012288   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.012321   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012475   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.012655   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.012825   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.013009   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.013176   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.013645   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.013669   59622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-771669 && echo "old-k8s-version-771669" | sudo tee /etc/hostname
	I0116 23:55:22.159863   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-771669
	
	I0116 23:55:22.159897   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.162806   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163257   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.163296   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163483   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.163700   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.163882   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.164023   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.164179   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.164551   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.164569   59622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-771669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-771669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-771669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:22.309881   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:22.309914   59622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:22.309935   59622 buildroot.go:174] setting up certificates
	I0116 23:55:22.309945   59622 provision.go:83] configureAuth start
	I0116 23:55:22.309957   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.310198   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.312567   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.312901   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.312930   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.313107   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.315382   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.315767   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.315807   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.316000   59622 provision.go:138] copyHostCerts
	I0116 23:55:22.316043   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:22.316053   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:22.316116   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:22.316202   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:22.316210   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:22.316228   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:22.316289   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:22.316296   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:22.316312   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:22.316365   59622 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-771669 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube old-k8s-version-771669]
	I0116 23:55:22.437253   59622 provision.go:172] copyRemoteCerts
	I0116 23:55:22.437325   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:22.437348   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.440075   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440363   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.440390   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440626   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.440808   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.440960   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.441145   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:22.536222   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:22.562061   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 23:55:22.586856   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:22.610936   59622 provision.go:86] duration metric: configureAuth took 300.975023ms
	I0116 23:55:22.610965   59622 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:22.611217   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:55:22.611306   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.614770   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615218   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.615253   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615508   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.615738   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.615931   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.616078   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.616259   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.616622   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.616641   59622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:22.958075   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:22.958102   59622 machine.go:91] provisioned docker machine in 949.885683ms
	I0116 23:55:22.958121   59622 start.go:300] post-start starting for "old-k8s-version-771669" (driver="kvm2")
	I0116 23:55:22.958136   59622 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:22.958160   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.958492   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:22.958528   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.961489   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.961850   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.961879   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.962042   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.962232   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.962423   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.962585   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.058948   59622 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:23.063281   59622 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:23.063309   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:23.063383   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:23.063477   59622 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:23.063589   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:23.075280   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:23.099934   59622 start.go:303] post-start completed in 141.796411ms
	I0116 23:55:23.099963   59622 fix.go:56] fixHost completed within 20.532183026s
	I0116 23:55:23.099986   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.102938   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103320   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.103355   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103471   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.103682   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103837   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103981   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.104148   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:23.104525   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:23.104539   59622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:23.239875   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449323.216935077
	
	I0116 23:55:23.239947   59622 fix.go:206] guest clock: 1705449323.216935077
	I0116 23:55:23.239963   59622 fix.go:219] Guest: 2024-01-16 23:55:23.216935077 +0000 UTC Remote: 2024-01-16 23:55:23.099966517 +0000 UTC m=+357.574360679 (delta=116.96856ms)
	I0116 23:55:23.239987   59622 fix.go:190] guest clock delta is within tolerance: 116.96856ms
	I0116 23:55:23.239994   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 20.672247822s
	I0116 23:55:23.240021   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.240303   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:23.243487   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.243962   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.243999   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.244245   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244731   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244917   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.245023   59622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:23.245091   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.245237   59622 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:23.245261   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.248169   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248391   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248664   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.248691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248835   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.248936   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.249012   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.249043   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249196   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249284   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.249351   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.249454   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249607   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249737   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.380837   59622 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:23.387163   59622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:23.543350   59622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:23.550519   59622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:23.550587   59622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:23.565019   59622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:23.565046   59622 start.go:475] detecting cgroup driver to use...
	I0116 23:55:23.565125   59622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:23.579314   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:23.591247   59622 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:23.591310   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:23.605294   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:23.618799   59622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:23.742752   59622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:23.876604   59622 docker.go:233] disabling docker service ...
	I0116 23:55:23.876678   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:23.891240   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:23.906010   59622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:24.059751   59622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:24.186517   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:24.201344   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:24.218947   59622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 23:55:24.219014   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.230843   59622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:24.230917   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.243120   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.252562   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.264610   59622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:24.275702   59622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:24.284982   59622 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:24.285046   59622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:24.298681   59622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:24.307743   59622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:55:24.425125   59622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:24.597300   59622 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:24.597373   59622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:24.603241   59622 start.go:543] Will wait 60s for crictl version
	I0116 23:55:24.603314   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:24.607580   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:24.648923   59622 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:24.649022   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.696485   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.754660   59622 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 23:55:24.756045   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:24.759033   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759392   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:24.759432   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759771   59622 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:24.764448   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:24.777724   59622 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 23:55:24.777812   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:24.825020   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:24.825088   59622 ssh_runner.go:195] Run: which lz4
	I0116 23:55:24.829208   59622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:24.833495   59622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:24.833523   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 23:55:24.992848   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:27.488098   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:23.916961   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.417588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.441144   60269 api_server.go:72] duration metric: took 2.5243712s to wait for apiserver process to appear ...
	I0116 23:55:24.441176   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:24.441198   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:24.441742   60269 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0116 23:55:24.941292   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.835831   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.835867   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.835882   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.868017   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.868058   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.942282   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.960876   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:27.960928   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:28.442258   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.449969   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.450001   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:24.397456   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:26.397862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.404313   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.941892   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.959617   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.959651   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:29.441742   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:29.446933   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0116 23:55:29.455520   60269 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:29.455548   60269 api_server.go:131] duration metric: took 5.014364838s to wait for apiserver health ...
	I0116 23:55:29.455561   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:29.455569   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:29.457775   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:26.372140   59622 crio.go:444] Took 1.542968 seconds to copy over tarball
	I0116 23:55:26.372233   59622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:29.316720   59622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944443375s)
	I0116 23:55:29.316749   59622 crio.go:451] Took 2.944578 seconds to extract the tarball
	I0116 23:55:29.316760   59622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:29.359053   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:29.407438   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:29.407466   59622 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:55:29.407526   59622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.407582   59622 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.407605   59622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.407624   59622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.407656   59622 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 23:55:29.407657   59622 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.407840   59622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.407530   59622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.409393   59622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 23:55:29.409457   59622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.409480   59622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.409647   59622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.409675   59622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.409682   59622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.622629   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.626907   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.630596   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 23:55:29.633693   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.635868   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.644919   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.649358   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.724339   59622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 23:55:29.724400   59622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.724467   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.795647   59622 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 23:55:29.795694   59622 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.795747   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.844312   59622 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 23:55:29.844373   59622 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 23:55:29.844427   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849856   59622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 23:55:29.849876   59622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.849911   59622 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 23:55:29.849928   59622 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.849956   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850005   59622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 23:55:29.850030   59622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.850047   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.850062   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850101   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.852839   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 23:55:29.872722   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.872753   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.872821   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.872997   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.963139   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 23:55:29.967047   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 23:55:29.981726   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 23:55:30.047814   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 23:55:30.047906   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 23:55:30.047972   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 23:55:30.048002   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 23:55:30.281680   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:30.423881   59622 cache_images.go:92] LoadImages completed in 1.016396141s
	W0116 23:55:30.423996   59622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0116 23:55:30.424113   59622 ssh_runner.go:195] Run: crio config
	I0116 23:55:30.486915   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:30.486935   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:30.486951   59622 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:30.486975   59622 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-771669 NodeName:old-k8s-version-771669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 23:55:30.487151   59622 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-771669"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-771669
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.114:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:30.487252   59622 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-771669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:55:30.487320   59622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 23:55:30.497629   59622 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:30.497706   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:30.505710   59622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 23:55:30.523292   59622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:30.539544   59622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
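	Note: the "kubeadm config:" block above is generated from the kubeadm options struct logged at kubeadm.go:176 and is what gets written to /var/tmp/minikube/kubeadm.yaml.new (2181 bytes) in the scp step just logged. A minimal Go sketch of how such a manifest can be rendered from parameters with text/template; the struct, template, and field names are illustrative stand-ins, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// clusterParams holds the handful of values substituted into the
	// ClusterConfiguration stanza in this sketch.
	type clusterParams struct {
		ClusterName       string
		BindPort          int
		KubernetesVersion string
		PodSubnet         string
		ServiceSubnet     string
	}

	const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("cluster").Parse(clusterTmpl))
		p := clusterParams{
			ClusterName:       "old-k8s-version-771669",
			BindPort:          8443,
			KubernetesVersion: "v1.16.0",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}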
	I0116 23:55:30.557436   59622 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:30.561329   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:29.488446   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:32.775251   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:29.459468   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:29.471218   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:29.488687   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:29.499433   60269 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:29.499458   60269 system_pods.go:61] "coredns-5dd5756b68-7kwrd" [38a96fe5-70a8-46e6-b899-b39558e08855] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:29.499465   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [bc2e7805-71f2-4924-80d7-2dd853ebeea9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:29.499472   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [8c01f8da-0156-4d16-b5e7-262427171137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:29.499484   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [04b93c96-ebc0-4257-b480-7be1ea9f7fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:29.499496   60269 system_pods.go:61] "kube-proxy-jmq58" [ec5c282f-04c8-4839-a16f-0a2024e0d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:29.499521   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [11e73d49-a3ba-44b3-9630-fd07fb23777f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:29.499533   60269 system_pods.go:61] "metrics-server-57f55c9bc5-bkbpm" [6ddb8af1-da20-4400-b6ba-6f0cf342b115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:29.499538   60269 system_pods.go:61] "storage-provisioner" [5b22598c-c5e0-4a9e-96f3-1732ecd018a1] Running
	I0116 23:55:29.499544   60269 system_pods.go:74] duration metric: took 10.840963ms to wait for pod list to return data ...
	I0116 23:55:29.499550   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:29.502918   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:29.502954   60269 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:29.502965   60269 node_conditions.go:105] duration metric: took 3.409475ms to run NodePressure ...
	I0116 23:55:29.502985   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:29.743687   60269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749616   60269 kubeadm.go:787] kubelet initialised
	I0116 23:55:29.749676   60269 kubeadm.go:788] duration metric: took 5.958924ms waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749687   60269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:29.756788   60269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.762593   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762669   60269 pod_ready.go:81] duration metric: took 5.856721ms waiting for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.762686   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762695   60269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.768772   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768801   60269 pod_ready.go:81] duration metric: took 6.092773ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.768816   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768824   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.775409   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775442   60269 pod_ready.go:81] duration metric: took 6.605139ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.775455   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775463   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.902106   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902206   60269 pod_ready.go:81] duration metric: took 126.731712ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.902236   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902269   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829869   60269 pod_ready.go:92] pod "kube-proxy-jmq58" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:30.829891   60269 pod_ready.go:81] duration metric: took 927.598475ms waiting for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829900   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:32.831782   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.899557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:33.397105   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.574029   59622 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669 for IP: 192.168.72.114
	I0116 23:55:30.890778   59622 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:30.890952   59622 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:30.891020   59622 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:30.891123   59622 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/client.key
	I0116 23:55:31.309085   59622 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key.9adeb8c5
	I0116 23:55:31.309205   59622 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key
	I0116 23:55:31.309360   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:31.309405   59622 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:31.309417   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:31.309461   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:31.309514   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:31.309547   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:31.309606   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:31.310493   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:31.335886   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:55:31.358617   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:31.382183   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:55:31.407509   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:31.429683   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:31.453368   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:31.476083   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:31.499326   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:31.522939   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:31.548912   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:31.571716   59622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:31.587851   59622 ssh_runner.go:195] Run: openssl version
	I0116 23:55:31.593185   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:31.602521   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.606986   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.607049   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.612447   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:31.622043   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:31.631959   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636586   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636653   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.642415   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:31.651566   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:31.660990   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665574   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665624   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.671129   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
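	Note: the openssl/ln sequence above installs each CA certificate under /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL locates trusted CAs by hashed filename. A minimal Go sketch of that hash-and-symlink step; the certificate path is taken from the log for illustration, and this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert hashes the certificate subject with
	// `openssl x509 -hash -noout` and symlinks the PEM file under
	// /etc/ssl/certs/<hash>.0, mirroring the commands in the log above.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			return err
		}
		return nil
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}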
	I0116 23:55:31.680951   59622 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:31.685144   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:31.690488   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:31.696140   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:31.702013   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:31.707887   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:31.713601   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
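	Note: each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509, assuming the same certificate paths; the helper below is a sketch, not minikube's code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon reports whether the certificate at path expires within the
	// given window, the same question `-checkend 86400` answers.
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}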
	I0116 23:55:31.719957   59622 kubeadm.go:404] StartCluster: {Name:old-k8s-version-771669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:31.720050   59622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:31.720106   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:31.764090   59622 cri.go:89] found id: ""
	I0116 23:55:31.764179   59622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:31.772783   59622 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:31.772800   59622 kubeadm.go:636] restartCluster start
	I0116 23:55:31.772900   59622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:31.782951   59622 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:31.784108   59622 kubeconfig.go:92] found "old-k8s-version-771669" server: "https://192.168.72.114:8443"
	I0116 23:55:31.786822   59622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:31.795516   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:31.795564   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:31.806541   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.296087   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.296205   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.308136   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.796155   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.796250   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.812275   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.295834   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.295918   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.309867   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.796504   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.796592   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.808880   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.296500   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.296567   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.308101   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.795674   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.795765   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.808334   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:35.295900   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.295998   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.308522   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.987445   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:37.488388   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:34.836821   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:36.837242   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.896319   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.396168   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.796048   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.796157   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.809841   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.296449   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.296573   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.309339   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.795874   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.795953   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.810740   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.296322   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.296421   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.308384   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.796469   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.796576   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.810173   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.295663   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.295750   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.307391   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.795952   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.796050   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.809147   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.295669   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.295754   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.308210   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.796104   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.796226   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.808134   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:40.295713   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.295815   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.307552   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
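	Note: the repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" entries above come from a fixed-interval poll of pgrep while the control plane is still down (the log runs it via sudo over SSH). A minimal Go sketch of that polling pattern; the pattern string and ~500ms cadence mirror the log, the function itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForProcess polls `pgrep -xnf <pattern>` until it reports a PID or
	// the deadline passes. pgrep exits non-zero when nothing matches, so an
	// error here simply means "not running yet".
	func waitForProcess(pattern string, interval, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("pgrep", "-xnf", pattern).Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(interval)
		}
		return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
	}

	func main() {
		pid, err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 10*time.Second)
		if err != nil {
			fmt.Println("stopped:", err)
			return
		}
		fmt.Println("apiserver pid:", pid)
	}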
	I0116 23:55:39.986946   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.487118   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.838230   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:39.837451   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:39.837475   60269 pod_ready.go:81] duration metric: took 9.007568234s waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:39.837495   60269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:41.844595   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.397089   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.896014   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.795619   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.795698   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.809529   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.296081   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.296153   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.309642   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.796355   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.796439   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.808383   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.808409   59622 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:41.808417   59622 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:41.808426   59622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:41.808480   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:41.851612   59622 cri.go:89] found id: ""
	I0116 23:55:41.851668   59622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:41.867103   59622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:41.876244   59622 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:41.876306   59622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886007   59622 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886029   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.004968   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.972680   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.175241   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.242840   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.330848   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:43.330935   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:43.831021   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.331539   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.831545   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.331601   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.354248   59622 api_server.go:72] duration metric: took 2.023403352s to wait for apiserver process to appear ...
	I0116 23:55:45.354271   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:45.354287   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:45.354802   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": dial tcp 192.168.72.114:8443: connect: connection refused
	I0116 23:55:44.988114   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.486765   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:43.846368   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.848129   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:48.344150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:44.897147   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.396873   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.855032   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:50.855392   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 23:55:50.855430   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.372327   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.372361   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.372383   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.429072   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.429102   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.854848   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.861367   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:51.861393   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.354990   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.360925   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:52.360951   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.854778   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.861036   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:55:52.868982   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:55:52.869013   59622 api_server.go:131] duration metric: took 7.514729701s to wait for apiserver health ...
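	Note: the healthz sequence above shows the apiserver moving from connection refused, through 403 (RBAC bootstrap not finished for anonymous requests) and 500 (post-start hooks still failing), to 200 "ok". A minimal Go sketch of that kind of probe loop, assuming the endpoint from the log; certificate verification is skipped here purely to keep the sketch self-contained, which is an assumption rather than minikube's approach:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns 200, treating connection
	// errors and 403/500 responses as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip TLS verification instead of loading the
				// cluster CA certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.114:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}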
	I0116 23:55:52.869024   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:52.869033   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:52.870842   59622 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:49.486891   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.489411   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:50.345462   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.345784   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:49.397270   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.397489   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:53.398253   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.872155   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:52.883251   59622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:52.904708   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:52.916515   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:55:52.916550   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:55:52.916558   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:55:52.916564   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:55:52.916571   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Pending
	I0116 23:55:52.916577   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:55:52.916584   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:55:52.916597   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:55:52.916606   59622 system_pods.go:74] duration metric: took 11.876364ms to wait for pod list to return data ...
	I0116 23:55:52.916618   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:52.920125   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:52.920158   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:52.920178   59622 node_conditions.go:105] duration metric: took 3.551281ms to run NodePressure ...
	I0116 23:55:52.920199   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:53.157112   59622 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161560   59622 kubeadm.go:787] kubelet initialised
	I0116 23:55:53.161590   59622 kubeadm.go:788] duration metric: took 4.45031ms waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161601   59622 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:53.167210   59622 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.172679   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172705   59622 pod_ready.go:81] duration metric: took 5.453621ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.172713   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172722   59622 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.178090   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178121   59622 pod_ready.go:81] duration metric: took 5.38864ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.178132   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178141   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.183932   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183963   59622 pod_ready.go:81] duration metric: took 5.809315ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.183973   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183979   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.309476   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309502   59622 pod_ready.go:81] duration metric: took 125.513469ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.309518   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309526   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.710400   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710426   59622 pod_ready.go:81] duration metric: took 400.892114ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.710435   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710441   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:54.108608   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108638   59622 pod_ready.go:81] duration metric: took 398.187187ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:54.108652   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108661   59622 pod_ready.go:38] duration metric: took 947.048567ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
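	Note: the pod_ready waits above check the PodReady condition of each system-critical pod and skip pods whose node is not yet Ready. A minimal sketch of the same check using client-go; the kubeconfig path, namespace, and pod name are taken from the log for illustration, and this is not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True, the same
	// condition the pod_ready waits in the log are checking.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17975-6238/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5644d7b6d9-9njqp", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("pod did not become Ready within 4m0s")
	}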
	I0116 23:55:54.108682   59622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:54.128862   59622 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:54.128889   59622 kubeadm.go:640] restartCluster took 22.356081524s
	I0116 23:55:54.128900   59622 kubeadm.go:406] StartCluster complete in 22.408946885s
	I0116 23:55:54.128919   59622 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.129004   59622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:54.131909   59622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.132201   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:54.132350   59622 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:54.132423   59622 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-771669"
	I0116 23:55:54.132445   59622 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-771669"
	I0116 23:55:54.132446   59622 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-771669"
	W0116 23:55:54.132457   59622 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:54.132467   59622 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:54.132468   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0116 23:55:54.132479   59622 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:54.132520   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132551   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132889   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.132943   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133041   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133083   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133245   59622 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-771669"
	I0116 23:55:54.133294   59622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-771669"
	I0116 23:55:54.133724   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133789   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.148645   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0116 23:55:54.148879   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 23:55:54.149227   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149356   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149715   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149739   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.149900   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149917   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.150032   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150210   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.150281   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150883   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.150932   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.154047   59622 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-771669"
	W0116 23:55:54.154070   59622 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:54.154099   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.154457   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.154502   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.156296   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0116 23:55:54.156719   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.157170   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.157199   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.157673   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.158266   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.158321   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.168301   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0116 23:55:54.168898   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.169505   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.169524   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.169888   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.170106   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.171966   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.174198   59622 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:54.173406   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0116 23:55:54.179587   59622 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.179605   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:54.179625   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.174560   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0116 23:55:54.180004   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180109   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180627   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180653   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180768   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180790   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180993   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181177   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181353   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.181578   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.181627   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.183580   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.185359   59622 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:54.184028   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.184548   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.186663   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:54.186672   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.186679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:54.186699   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.186698   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.186864   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.186964   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.187041   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.189698   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190070   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.190133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190266   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.190461   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.190582   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.190678   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.215481   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0116 23:55:54.215974   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.216416   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.216435   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.216816   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.217016   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.219327   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.219556   59622 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.219571   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:54.219588   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.222719   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223367   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.223154   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.223442   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223564   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.223712   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.223850   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.356173   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:54.356192   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:54.371191   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.410651   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:54.410679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:54.413826   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.524186   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.524211   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:54.553600   59622 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:54.610636   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.692080   59622 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-771669" context rescaled to 1 replicas
	I0116 23:55:54.692117   59622 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:54.694001   59622 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:54.695339   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:55.104119   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104142   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104162   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104148   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104471   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104493   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.104504   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104514   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104558   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104729   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104745   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104748   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105133   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105152   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105185   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.105199   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.105402   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.105496   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105518   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.113836   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.113861   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.114230   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.114254   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.114275   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.125955   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.125983   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.125955   59622 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:55:55.126228   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126243   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126267   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.126278   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.126579   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126599   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126609   59622 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:55.126587   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.128592   59622 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 23:55:55.129717   59622 addons.go:505] enable addons completed in 997.38021ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 23:55:53.987019   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.987081   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.485357   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:54.345875   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:56.347375   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.898737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.905488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.130634   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:59.630394   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:56:00.487739   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.985925   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.845233   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:00.845467   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:03.344488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.130130   59622 node_ready.go:49] node "old-k8s-version-771669" has status "Ready":"True"
	I0116 23:56:02.130152   59622 node_ready.go:38] duration metric: took 7.004088356s waiting for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:56:02.130160   59622 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.135239   59622 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140322   59622 pod_ready.go:92] pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.140347   59622 pod_ready.go:81] duration metric: took 5.084772ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140358   59622 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144917   59622 pod_ready.go:92] pod "etcd-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.144938   59622 pod_ready.go:81] duration metric: took 4.572247ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144946   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149588   59622 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.149606   59622 pod_ready.go:81] duration metric: took 4.65461ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149614   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153874   59622 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.153891   59622 pod_ready.go:81] duration metric: took 4.272031ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153899   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531721   59622 pod_ready.go:92] pod "kube-proxy-9ghls" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.531742   59622 pod_ready.go:81] duration metric: took 377.837979ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531751   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930934   59622 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.930957   59622 pod_ready.go:81] duration metric: took 399.199037ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930966   59622 pod_ready.go:38] duration metric: took 800.791409ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.930982   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:56:02.931031   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:56:02.945606   59622 api_server.go:72] duration metric: took 8.253459173s to wait for apiserver process to appear ...
	I0116 23:56:02.945631   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:56:02.945649   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:56:02.952493   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:56:02.953510   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:56:02.953536   59622 api_server.go:131] duration metric: took 7.895148ms to wait for apiserver health ...
	I0116 23:56:02.953545   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:56:03.133648   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:56:03.133673   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.133679   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.133683   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.133688   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.133691   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.133695   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.133698   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.133704   59622 system_pods.go:74] duration metric: took 180.152859ms to wait for pod list to return data ...
	I0116 23:56:03.133710   59622 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:56:03.331291   59622 default_sa.go:45] found service account: "default"
	I0116 23:56:03.331318   59622 default_sa.go:55] duration metric: took 197.601815ms for default service account to be created ...
	I0116 23:56:03.331327   59622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:56:03.535418   59622 system_pods.go:86] 7 kube-system pods found
	I0116 23:56:03.535445   59622 system_pods.go:89] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.535450   59622 system_pods.go:89] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.535454   59622 system_pods.go:89] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.535459   59622 system_pods.go:89] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.535462   59622 system_pods.go:89] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.535466   59622 system_pods.go:89] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.535470   59622 system_pods.go:89] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.535476   59622 system_pods.go:126] duration metric: took 204.144185ms to wait for k8s-apps to be running ...
	I0116 23:56:03.535483   59622 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:56:03.535528   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:56:03.558457   59622 system_svc.go:56] duration metric: took 22.958568ms WaitForService to wait for kubelet.
	I0116 23:56:03.558483   59622 kubeadm.go:581] duration metric: took 8.866344408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:56:03.558508   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:56:03.731393   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:56:03.731421   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:56:03.731429   59622 node_conditions.go:105] duration metric: took 172.916822ms to run NodePressure ...
	I0116 23:56:03.731440   59622 start.go:228] waiting for startup goroutines ...
	I0116 23:56:03.731446   59622 start.go:233] waiting for cluster config update ...
	I0116 23:56:03.731455   59622 start.go:242] writing updated cluster config ...
	I0116 23:56:03.731701   59622 ssh_runner.go:195] Run: rm -f paused
	I0116 23:56:03.779121   59622 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 23:56:03.780832   59622 out.go:177] 
	W0116 23:56:03.782249   59622 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 23:56:03.783563   59622 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 23:56:03.784839   59622 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-771669" cluster and "default" namespace by default
	I0116 23:56:00.398654   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.895567   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:04.986421   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:06.987967   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.844145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.844338   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.397178   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.895626   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.486597   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:11.987301   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:10.345558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.346663   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.896758   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.397091   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.488021   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.488653   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.844671   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.846046   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.897098   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:17.396519   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.986905   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.488422   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.846198   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.344147   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:19.397728   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.896773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.986213   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:25.986326   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:27.987150   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.845648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.344054   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:28.344553   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:24.396383   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.896341   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.487401   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.986835   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.346441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.847915   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:29.396831   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:31.397001   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:33.896875   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.486456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.488505   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:34.852382   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.347707   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.897340   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:38.397188   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.987512   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.487096   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.845150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:40.397474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.895926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.985826   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.987077   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.344935   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.844558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:45.397742   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:47.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:48.987672   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.488276   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.344755   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.844573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.902616   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:52.397613   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.989294   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:56.486373   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.844691   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:55.844956   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.345033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:54.899462   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:57.396680   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.986702   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.485949   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.486250   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:00.347078   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:02.845105   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:59.397016   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.397815   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.898419   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.486385   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.486685   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.344293   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.345029   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:06.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:08.397358   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.986254   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:11.986807   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.845903   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.345589   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:10.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.896725   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:13.986990   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.487092   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:14.845336   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.845800   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:15.396130   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:17.399737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:18.986833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:20.987345   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.486929   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.344648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.345638   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.896048   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.897272   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:25.987181   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.488006   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.846298   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.345451   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.346186   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:24.398032   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.896171   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.987497   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:33.485899   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.347831   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:32.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:29.398760   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:31.896331   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.486038   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.487296   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.344615   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.844449   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:34.397051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:36.400079   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:38.896897   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.492372   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.987336   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.847519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:42.346252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.396236   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.396714   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.988240   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:46.486455   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:48.487134   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:44.848036   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.345407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:45.397310   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.397378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:50.986902   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.492230   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.845627   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.397826   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.895923   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.897342   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:55.986753   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:57.986861   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:54.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.344864   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.345725   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.897155   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.486888   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.987550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.844347   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.846516   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:01.396565   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:03.397374   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:04.990116   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.487567   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.345481   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.844570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.897023   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:08.396985   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.990087   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.490589   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.844815   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:11.845732   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:10.895979   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.896502   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.986451   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.986611   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.344767   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.844872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:15.398203   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:17.399261   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:18.987191   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.487703   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:23.487926   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.347376   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.845439   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.896972   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:22.397424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:25.987262   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.486174   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.344012   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.347050   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.398243   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.896557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.987243   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.988415   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.844551   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.845899   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.846576   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:29.396646   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:31.397556   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:33.896411   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.486850   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.985735   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.344337   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.344473   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.896685   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.898876   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.986999   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.486890   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.345534   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:41.345897   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:40.396241   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.396546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.987464   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.988853   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:43.846142   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.343994   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.396719   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.896228   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.896671   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:49.486803   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:51.491540   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.845009   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.847872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:52.847933   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.897309   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.396763   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.987492   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:56.486550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:58.486963   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.346425   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.347346   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.397687   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.399191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:00.987456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.486837   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.843983   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.844326   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.895907   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.896151   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.900424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:05.991223   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.486493   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.844751   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.344021   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.344949   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.397063   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.895750   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.987148   59938 pod_ready.go:81] duration metric: took 4m0.007687151s waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:08.987175   59938 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 23:59:08.987182   59938 pod_ready.go:38] duration metric: took 4m1.609147819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:08.987199   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:59:08.987235   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:08.987285   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:09.035133   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:09.035154   59938 cri.go:89] found id: ""
	I0116 23:59:09.035161   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:09.035211   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.039082   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:09.039138   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:09.085096   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:09.085167   59938 cri.go:89] found id: ""
	I0116 23:59:09.085181   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:09.085246   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.090821   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:09.090893   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:09.127517   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.127548   59938 cri.go:89] found id: ""
	I0116 23:59:09.127558   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:09.127620   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.131643   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:09.131759   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:09.168954   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:09.168979   59938 cri.go:89] found id: ""
	I0116 23:59:09.168988   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:09.169049   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.173389   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:09.173454   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:09.212516   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.212543   59938 cri.go:89] found id: ""
	I0116 23:59:09.212549   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:09.212597   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.216401   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:09.216458   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:09.253140   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.253166   59938 cri.go:89] found id: ""
	I0116 23:59:09.253176   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:09.253235   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.257248   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:09.257315   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:09.296077   59938 cri.go:89] found id: ""
	I0116 23:59:09.296108   59938 logs.go:284] 0 containers: []
	W0116 23:59:09.296119   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:09.296126   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:09.296184   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:09.346212   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:09.346234   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:09.346240   59938 cri.go:89] found id: ""
	I0116 23:59:09.346261   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:09.346320   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.350651   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.353960   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:09.353984   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.387875   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:09.387900   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.428147   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:09.428173   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:09.481107   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:09.481135   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:09.536958   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:09.536994   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:09.550512   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:09.550547   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.605837   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:09.605870   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:10.096496   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:10.096548   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:10.134931   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:10.134973   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:10.276791   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:10.276824   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:10.335509   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:10.335544   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:10.395664   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:10.395708   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.431013   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:10.431051   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:12.975358   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:59:12.989628   59938 api_server.go:72] duration metric: took 4m12.851755215s to wait for apiserver process to appear ...
	I0116 23:59:12.989650   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:59:12.989689   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:12.989738   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:13.026039   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.026071   59938 cri.go:89] found id: ""
	I0116 23:59:13.026083   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:13.026138   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.030174   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:13.030236   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:13.067808   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:13.067834   59938 cri.go:89] found id: ""
	I0116 23:59:13.067840   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:13.067888   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.072042   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:13.072118   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:13.111330   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.111351   59938 cri.go:89] found id: ""
	I0116 23:59:13.111359   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:13.111403   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.115095   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:13.115187   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:13.158668   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:13.158691   59938 cri.go:89] found id: ""
	I0116 23:59:13.158699   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:13.158758   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.162836   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:13.162899   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:13.202353   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:13.202372   59938 cri.go:89] found id: ""
	I0116 23:59:13.202379   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:13.202425   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.206475   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:13.206544   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:13.241036   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:13.241069   59938 cri.go:89] found id: ""
	I0116 23:59:13.241080   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:13.241136   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.245245   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:13.245309   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:13.286069   59938 cri.go:89] found id: ""
	I0116 23:59:13.286098   59938 logs.go:284] 0 containers: []
	W0116 23:59:13.286107   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:13.286115   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:13.286178   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:13.324129   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.324148   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.324152   59938 cri.go:89] found id: ""
	I0116 23:59:13.324159   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:13.324201   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.328325   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.332030   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:13.332052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:13.345141   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:13.345181   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.404778   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:13.404809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.441286   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:13.441323   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:13.503668   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:13.503702   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.542599   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:13.542631   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.347184   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:12.844417   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:10.896545   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.397454   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.578579   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:13.578609   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.615906   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:13.615934   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:14.022019   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:14.022058   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:14.139776   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:14.139809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:14.201936   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:14.201970   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:14.240473   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:14.240500   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:14.291008   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:14.291037   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:16.843555   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:59:16.849532   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:59:16.850519   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:59:16.850538   59938 api_server.go:131] duration metric: took 3.860882856s to wait for apiserver health ...
	I0116 23:59:16.850547   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:59:16.850568   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:16.850610   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:16.900417   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:16.900434   59938 cri.go:89] found id: ""
	I0116 23:59:16.900441   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:16.900493   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.905495   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:16.905548   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:16.945387   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:16.945406   59938 cri.go:89] found id: ""
	I0116 23:59:16.945413   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:16.945463   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.949948   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:16.950016   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:16.987183   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:16.987202   59938 cri.go:89] found id: ""
	I0116 23:59:16.987209   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:16.987252   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.992140   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:16.992191   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:17.029253   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.029275   59938 cri.go:89] found id: ""
	I0116 23:59:17.029282   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:17.029336   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.033524   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:17.033609   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:17.068889   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:17.068913   59938 cri.go:89] found id: ""
	I0116 23:59:17.068932   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:17.068986   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.072818   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:17.072885   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:17.111186   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.111207   59938 cri.go:89] found id: ""
	I0116 23:59:17.111216   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:17.111279   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.115133   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:17.115192   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:17.150279   59938 cri.go:89] found id: ""
	I0116 23:59:17.150307   59938 logs.go:284] 0 containers: []
	W0116 23:59:17.150316   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:17.150321   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:17.150401   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:17.192284   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.192321   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.192328   59938 cri.go:89] found id: ""
	I0116 23:59:17.192338   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:17.192394   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.196472   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.200243   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:17.200266   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.240155   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:17.240188   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:17.252553   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:17.252585   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.304688   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:17.304721   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.346444   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:17.346470   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:17.497208   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:17.497241   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:17.561621   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:17.561648   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:17.611648   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:17.611677   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.646407   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:17.646436   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:17.991476   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:17.991528   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:18.053214   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:18.053251   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:18.128011   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:18.128049   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:18.165018   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:18.165052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:15.345715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.849104   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:15.896059   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.890054   60073 pod_ready.go:81] duration metric: took 4m0.00102229s waiting for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:17.890102   60073 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:17.890127   60073 pod_ready.go:38] duration metric: took 4m7.665333761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:17.890162   60073 kubeadm.go:640] restartCluster took 4m29.748178484s
	W0116 23:59:17.890247   60073 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:17.890288   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:20.715055   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:59:20.715096   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.715109   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.715116   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.715123   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.715129   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.715136   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.715146   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.715156   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.715180   59938 system_pods.go:74] duration metric: took 3.864627163s to wait for pod list to return data ...
	I0116 23:59:20.715190   59938 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:59:20.718138   59938 default_sa.go:45] found service account: "default"
	I0116 23:59:20.718165   59938 default_sa.go:55] duration metric: took 2.964863ms for default service account to be created ...
	I0116 23:59:20.718175   59938 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:59:20.724393   59938 system_pods.go:86] 8 kube-system pods found
	I0116 23:59:20.724420   59938 system_pods.go:89] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.724428   59938 system_pods.go:89] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.724435   59938 system_pods.go:89] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.724443   59938 system_pods.go:89] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.724449   59938 system_pods.go:89] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.724457   59938 system_pods.go:89] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.724467   59938 system_pods.go:89] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.724479   59938 system_pods.go:89] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.724490   59938 system_pods.go:126] duration metric: took 6.307831ms to wait for k8s-apps to be running ...
	I0116 23:59:20.724503   59938 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:59:20.724558   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:20.739056   59938 system_svc.go:56] duration metric: took 14.504317ms WaitForService to wait for kubelet.
	I0116 23:59:20.739102   59938 kubeadm.go:581] duration metric: took 4m20.601225794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:59:20.739130   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:59:20.742521   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:59:20.742550   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:59:20.742565   59938 node_conditions.go:105] duration metric: took 3.429513ms to run NodePressure ...
	I0116 23:59:20.742581   59938 start.go:228] waiting for startup goroutines ...
	I0116 23:59:20.742594   59938 start.go:233] waiting for cluster config update ...
	I0116 23:59:20.742607   59938 start.go:242] writing updated cluster config ...
	I0116 23:59:20.742897   59938 ssh_runner.go:195] Run: rm -f paused
	I0116 23:59:20.796748   59938 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 23:59:20.799136   59938 out.go:177] * Done! kubectl is now configured to use "no-preload-085322" cluster and "default" namespace by default
	I0116 23:59:20.345640   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:22.845018   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:24.845103   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:26.846579   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:29.345070   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.346027   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:33.346506   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.203795   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.313480768s)
	I0116 23:59:31.203876   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:31.217359   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:31.228245   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:31.238220   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:31.238268   60073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:31.453638   60073 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 23:59:35.845570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:37.845959   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:42.067699   60073 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:42.067758   60073 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:42.067846   60073 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:42.067963   60073 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:42.068086   60073 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:42.068177   60073 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:42.069920   60073 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:42.070029   60073 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:42.070134   60073 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:42.070239   60073 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:42.070320   60073 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:42.070461   60073 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:42.070543   60073 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:42.070628   60073 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:42.070700   60073 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:42.070790   60073 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:42.070885   60073 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:42.070932   60073 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:42.070998   60073 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:42.071063   60073 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:42.071135   60073 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:42.071215   60073 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:42.071285   60073 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:42.071387   60073 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:42.071470   60073 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:42.072979   60073 out.go:204]   - Booting up control plane ...
	I0116 23:59:42.073092   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:42.073200   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:42.073276   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:42.073388   60073 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:42.073521   60073 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:42.073576   60073 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:42.073797   60073 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:42.073902   60073 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002800 seconds
	I0116 23:59:42.074028   60073 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 23:59:42.074167   60073 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 23:59:42.074262   60073 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 23:59:42.074513   60073 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-837871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 23:59:42.074590   60073 kubeadm.go:322] [bootstrap-token] Using token: ta3wls.bkzq7grnlnkl7idk
	I0116 23:59:42.076261   60073 out.go:204]   - Configuring RBAC rules ...
	I0116 23:59:42.076394   60073 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 23:59:42.076494   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 23:59:42.076672   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 23:59:42.076836   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 23:59:42.077027   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 23:59:42.077141   60073 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 23:59:42.077286   60073 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 23:59:42.077338   60073 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 23:59:42.077401   60073 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 23:59:42.077420   60073 kubeadm.go:322] 
	I0116 23:59:42.077490   60073 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 23:59:42.077501   60073 kubeadm.go:322] 
	I0116 23:59:42.077590   60073 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 23:59:42.077599   60073 kubeadm.go:322] 
	I0116 23:59:42.077631   60073 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 23:59:42.077704   60073 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 23:59:42.077768   60073 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 23:59:42.077777   60073 kubeadm.go:322] 
	I0116 23:59:42.077841   60073 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 23:59:42.077855   60073 kubeadm.go:322] 
	I0116 23:59:42.077910   60073 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 23:59:42.077918   60073 kubeadm.go:322] 
	I0116 23:59:42.077980   60073 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 23:59:42.078071   60073 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 23:59:42.078167   60073 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 23:59:42.078177   60073 kubeadm.go:322] 
	I0116 23:59:42.078274   60073 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 23:59:42.078382   60073 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 23:59:42.078392   60073 kubeadm.go:322] 
	I0116 23:59:42.078488   60073 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078612   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 23:59:42.078642   60073 kubeadm.go:322] 	--control-plane 
	I0116 23:59:42.078651   60073 kubeadm.go:322] 
	I0116 23:59:42.078749   60073 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 23:59:42.078758   60073 kubeadm.go:322] 
	I0116 23:59:42.078854   60073 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078989   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 23:59:42.079007   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:59:42.079017   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:59:42.080763   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:59:39.838671   60269 pod_ready.go:81] duration metric: took 4m0.001157455s waiting for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:39.838703   60269 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:39.838724   60269 pod_ready.go:38] duration metric: took 4m10.089026356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:39.838774   60269 kubeadm.go:640] restartCluster took 4m29.617057242s
	W0116 23:59:39.838852   60269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:39.838881   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:42.082183   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:59:42.116830   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:59:42.163609   60073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:59:42.163699   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.163705   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=embed-certs-837871 minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.221959   60073 ops.go:34] apiserver oom_adj: -16
	I0116 23:59:42.506451   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.007345   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.506584   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.007197   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.507002   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.006480   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.506954   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.006461   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.506833   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.007157   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.506780   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.007146   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.506504   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:49.006489   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.364253   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.525344336s)
	I0116 23:59:53.364334   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:53.379240   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:53.389562   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:53.400331   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:53.400385   60269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:53.462116   60269 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:53.462202   60269 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:53.624890   60269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:53.625015   60269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:53.625132   60269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:53.877364   60269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:49.506939   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.007132   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.506909   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.006499   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.506508   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.006475   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.507008   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.007272   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.506479   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.007240   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.507034   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.651685   60073 kubeadm.go:1088] duration metric: took 12.488048347s to wait for elevateKubeSystemPrivileges.
	I0116 23:59:54.651729   60073 kubeadm.go:406] StartCluster complete in 5m6.561279262s
	I0116 23:59:54.651753   60073 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.651855   60073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:59:54.654608   60073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.654868   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:59:54.654894   60073 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:59:54.654964   60073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-837871"
	I0116 23:59:54.654980   60073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-837871"
	I0116 23:59:54.655005   60073 addons.go:69] Setting metrics-server=true in profile "embed-certs-837871"
	I0116 23:59:54.655018   60073 addons.go:234] Setting addon metrics-server=true in "embed-certs-837871"
	W0116 23:59:54.655027   60073 addons.go:243] addon metrics-server should already be in state true
	I0116 23:59:54.655090   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:59:54.655026   60073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-837871"
	I0116 23:59:54.655160   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.654988   60073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-837871"
	W0116 23:59:54.655234   60073 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:59:54.655271   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.655539   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655568   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655652   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655734   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.672017   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0116 23:59:54.672591   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.673220   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.673241   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.673335   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0116 23:59:54.673863   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0116 23:59:54.673894   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.673865   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674262   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674430   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674447   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.674491   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.674517   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.674764   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.674932   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674943   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.675310   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.675465   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.675601   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.675631   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.679148   60073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-837871"
	W0116 23:59:54.679166   60073 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:59:54.679192   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.679564   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.679582   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.694210   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0116 23:59:54.694711   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.694923   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0116 23:59:54.695308   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.695325   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.695432   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.695724   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.696036   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.696059   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.696124   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.696524   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.697116   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.697142   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.697326   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0116 23:59:54.697741   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.698016   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.700178   60073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:59:54.698504   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.701842   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.701911   60073 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:54.701927   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:59:54.701945   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.704090   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.704258   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.705992   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.706067   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.707873   60073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:59:53.878701   60269 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:53.878801   60269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:53.878881   60269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:53.879376   60269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:53.879833   60269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:53.880391   60269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:53.880900   60269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:53.881422   60269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:53.881941   60269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:53.882468   60269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:53.882982   60269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:53.883410   60269 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:53.883502   60269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:54.118678   60269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:54.334917   60269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:54.487424   60269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:55.124961   60269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:55.125701   60269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:55.128156   60269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:54.706475   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.706576   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.709278   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:59:54.709292   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:59:54.709305   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.709341   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.709501   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.709672   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.709805   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.712515   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713092   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.713180   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713283   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.713426   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.713633   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.713742   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.716354   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0116 23:59:54.716699   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.717118   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.717135   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.717441   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.717677   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.719338   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.719591   60073 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:54.719604   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:59:54.719619   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.722542   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.722963   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.723002   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.723112   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.723259   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.723463   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.723587   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.885431   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 23:59:55.001297   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:59:55.001329   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:59:55.003513   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:55.008428   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:55.068722   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:59:55.068751   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:59:55.129663   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:55.129686   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:59:55.161891   60073 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-837871" context rescaled to 1 replicas
	I0116 23:59:55.161935   60073 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:59:55.164356   60073 out.go:177] * Verifying Kubernetes components...
	I0116 23:59:55.165822   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:55.240612   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:56.696329   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810851137s)
	I0116 23:59:56.696383   60073 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 23:59:56.696338   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.69278648s)
	I0116 23:59:56.696422   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696440   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.696806   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.696868   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.696879   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.696889   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696898   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.697174   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.697191   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.697193   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.729656   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.729685   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.730006   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.730047   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.730051   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.196943   60073 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.031082317s)
	I0116 23:59:57.196991   60073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.197171   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.188708335s)
	I0116 23:59:57.197216   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197232   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197556   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197573   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.197590   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197600   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197905   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.197908   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197976   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.211232   60073 node_ready.go:49] node "embed-certs-837871" has status "Ready":"True"
	I0116 23:59:57.211308   60073 node_ready.go:38] duration metric: took 14.304366ms waiting for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.211330   60073 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:57.230768   60073 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:57.274393   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033730298s)
	I0116 23:59:57.274453   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274471   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.274881   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.274904   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.274915   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274925   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.275196   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.275249   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.275273   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.275284   60073 addons.go:470] Verifying addon metrics-server=true in "embed-certs-837871"
	I0116 23:59:57.277304   60073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 23:59:55.129817   60269 out.go:204]   - Booting up control plane ...
	I0116 23:59:55.129937   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:55.130951   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:55.132943   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:55.149929   60269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:55.151138   60269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:55.151234   60269 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:55.303686   60269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:57.278953   60073 addons.go:505] enable addons completed in 2.62405803s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 23:59:58.738410   60073 pod_ready.go:92] pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.738434   60073 pod_ready.go:81] duration metric: took 1.507588571s waiting for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.738444   60073 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744592   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.744617   60073 pod_ready.go:81] duration metric: took 6.165419ms waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744626   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750130   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.750152   60073 pod_ready.go:81] duration metric: took 5.519057ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750164   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755783   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.755809   60073 pod_ready.go:81] duration metric: took 5.636904ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755821   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801735   60073 pod_ready.go:92] pod "kube-proxy-n2l6s" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.801769   60073 pod_ready.go:81] duration metric: took 45.939564ms waiting for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801784   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:02.807761   60269 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503615 seconds
	I0117 00:00:02.807943   60269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:00:02.828242   60269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:00:03.364977   60269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:00:03.365242   60269 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-967325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:00:03.879636   60269 kubeadm.go:322] [bootstrap-token] Using token: y6fuay.d44apxq5qutu9x05
	I0116 23:59:59.202392   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:59.202420   60073 pod_ready.go:81] duration metric: took 400.626378ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:59.202435   60073 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:01.211490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.710138   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.881170   60269 out.go:204]   - Configuring RBAC rules ...
	I0117 00:00:03.881357   60269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:00:03.888392   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:00:03.896580   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:00:03.900204   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:00:03.907475   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:00:03.911613   60269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:00:03.931171   60269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:00:04.171033   60269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:00:04.300769   60269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:00:04.300793   60269 kubeadm.go:322] 
	I0117 00:00:04.300911   60269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:00:04.300944   60269 kubeadm.go:322] 
	I0117 00:00:04.301038   60269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:00:04.301049   60269 kubeadm.go:322] 
	I0117 00:00:04.301089   60269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:00:04.301161   60269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:00:04.301223   60269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:00:04.301234   60269 kubeadm.go:322] 
	I0117 00:00:04.301302   60269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:00:04.301312   60269 kubeadm.go:322] 
	I0117 00:00:04.301373   60269 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:00:04.301387   60269 kubeadm.go:322] 
	I0117 00:00:04.301445   60269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:00:04.301545   60269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:00:04.301645   60269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:00:04.301656   60269 kubeadm.go:322] 
	I0117 00:00:04.301758   60269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:00:04.301861   60269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:00:04.301871   60269 kubeadm.go:322] 
	I0117 00:00:04.301972   60269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302108   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:00:04.302156   60269 kubeadm.go:322] 	--control-plane 
	I0117 00:00:04.302167   60269 kubeadm.go:322] 
	I0117 00:00:04.302261   60269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:00:04.302272   60269 kubeadm.go:322] 
	I0117 00:00:04.302381   60269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302499   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:00:04.303423   60269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:00:04.303460   60269 cni.go:84] Creating CNI manager for ""
	I0117 00:00:04.303481   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:00:04.305311   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:00:04.307124   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:00:04.322172   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0117 00:00:04.389195   60269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:00:04.389280   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.389289   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=default-k8s-diff-port-967325 minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.714781   60269 ops.go:34] apiserver oom_adj: -16
	I0117 00:00:04.714929   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.215335   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.715241   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.215729   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.715270   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.215562   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.716006   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.215883   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.715530   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.710945   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:08.210490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:09.215561   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:09.715330   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215559   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.715284   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.215535   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.715573   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.215144   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.715603   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.715595   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:12.709378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:14.215373   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:14.715933   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.715488   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.215344   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.714958   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.874728   60269 kubeadm.go:1088] duration metric: took 12.485508304s to wait for elevateKubeSystemPrivileges.
	I0117 00:00:16.874771   60269 kubeadm.go:406] StartCluster complete in 5m6.711968782s
	I0117 00:00:16.874796   60269 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.874888   60269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:00:16.877055   60269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.877357   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0117 00:00:16.877379   60269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0117 00:00:16.877462   60269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877481   60269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877496   60269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877517   60269 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877523   60269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-967325"
	W0117 00:00:16.877526   60269 addons.go:243] addon metrics-server should already be in state true
	I0117 00:00:16.877487   60269 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877580   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0117 00:00:16.877586   60269 addons.go:243] addon storage-provisioner should already be in state true
	I0117 00:00:16.877598   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877641   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.877996   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.878023   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878044   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878110   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.894446   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0117 00:00:16.894710   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0117 00:00:16.894884   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895198   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895375   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895395   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895731   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895757   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895804   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896075   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896401   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.896436   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.896491   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0117 00:00:16.896763   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.897458   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.898007   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.898028   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.898517   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.899079   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.899106   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.900589   60269 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-967325"
	W0117 00:00:16.900606   60269 addons.go:243] addon default-storageclass should already be in state true
	I0117 00:00:16.900632   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.900945   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.900974   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.917329   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0117 00:00:16.918223   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0117 00:00:16.918283   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918593   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918787   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.918806   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919109   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.919135   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919173   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919426   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.919500   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.921674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.923470   60269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0117 00:00:16.922093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.924865   60269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:16.924882   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0117 00:00:16.924900   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.926158   60269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0117 00:00:16.927440   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0117 00:00:16.927461   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0117 00:00:16.927490   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.928105   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.928694   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.929107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.929289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.929432   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.930149   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0117 00:00:16.930552   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.931255   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.931275   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.931335   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931584   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.931606   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931762   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.931908   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.932042   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.932086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.932178   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.933382   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.933419   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.949543   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0117 00:00:16.950092   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.950585   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.950611   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.950912   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.951212   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.952912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.953207   60269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:16.953221   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0117 00:00:16.953242   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.955778   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956104   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.956144   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956381   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.956659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.956808   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.956958   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:17.129430   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0117 00:00:17.167358   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:17.198527   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0117 00:00:17.198553   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0117 00:00:17.313705   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0117 00:00:17.313743   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0117 00:00:17.318720   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:17.387945   60269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-967325" context rescaled to 1 replicas
	I0117 00:00:17.387984   60269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:00:17.391319   60269 out.go:177] * Verifying Kubernetes components...
	I0117 00:00:17.392893   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:00:17.493520   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:17.493544   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0117 00:00:17.613989   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:14.710779   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:17.209946   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:18.852085   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.722614342s)
	I0117 00:00:18.852124   60269 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0117 00:00:19.595960   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.277198121s)
	I0117 00:00:19.595983   60269 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.203057581s)
	I0117 00:00:19.596019   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596022   60269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.596033   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596131   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.428744793s)
	I0117 00:00:19.596164   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596175   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596418   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596437   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596448   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596458   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596544   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596572   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596585   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596603   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596675   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596683   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596697   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.598431   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.598485   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.598507   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.614041   60269 node_ready.go:49] node "default-k8s-diff-port-967325" has status "Ready":"True"
	I0117 00:00:19.614070   60269 node_ready.go:38] duration metric: took 18.033715ms waiting for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.614083   60269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:00:19.631026   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.631065   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.631393   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.631412   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.631430   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.643995   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.685268   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.071240033s)
	I0117 00:00:19.685313   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685685   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685706   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685722   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685725   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.685733   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685949   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685973   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685984   60269 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:19.688162   60269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0117 00:00:19.690707   60269 addons.go:505] enable addons completed in 2.813327403s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0117 00:00:20.653786   60269 pod_ready.go:92] pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.653817   60269 pod_ready.go:81] duration metric: took 1.009789354s waiting for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.653827   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.657327   60269 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657355   60269 pod_ready.go:81] duration metric: took 3.520465ms waiting for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	E0117 00:00:20.657367   60269 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657375   60269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664327   60269 pod_ready.go:92] pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.664345   60269 pod_ready.go:81] duration metric: took 6.963883ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664354   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669229   60269 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.669247   60269 pod_ready.go:81] duration metric: took 4.887581ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669255   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675553   60269 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.675577   60269 pod_ready.go:81] duration metric: took 6.316801ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675585   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800600   60269 pod_ready.go:92] pod "kube-proxy-2z6bl" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:21.800632   60269 pod_ready.go:81] duration metric: took 1.125039774s waiting for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800646   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200536   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:22.200559   60269 pod_ready.go:81] duration metric: took 399.905665ms waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200569   60269 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.212369   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:21.709474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:23.710530   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:24.210445   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:26.709024   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:28.709454   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:25.710634   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:27.710692   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:30.709571   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.710848   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:29.710867   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.209611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:35.208419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:37.708871   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:34.209847   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:36.210863   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:38.211047   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.209274   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711560   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.212061   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711598   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.209016   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211322   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.211051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.709459   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.209458   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.711889   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.210405   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.710123   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:57.208591   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.210670   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:56.711102   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:58.711595   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:59.708515   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.710699   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.210587   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:03.210938   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:04.207715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:06.709563   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:05.211825   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:07.709958   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:09.208156   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:11.208879   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:13.708545   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:10.211100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:12.710100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:16.209033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:18.209754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:14.710821   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:17.212258   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:20.708444   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.712038   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:19.711436   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.210580   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.714772   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:27.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.213488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:26.711404   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.710945   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:32.208179   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.211008   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:31.212442   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:33.711966   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:34.208936   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.209612   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.708413   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.211118   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.214093   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:41.208750   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:43.208812   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:40.710199   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:42.710497   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.708094   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:48.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.210899   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:47.214352   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:50.708669   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:52.709880   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:49.709767   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:51.710715   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:53.714522   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:55.209030   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:57.709205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:56.212226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:58.715976   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:00.209358   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:02.710521   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:01.210842   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:03.710418   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.208742   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:07.210121   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.711354   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:08.211933   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:09.210830   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:11.708402   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:13.710205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:10.212433   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:12.715928   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:16.207633   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:18.208824   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:15.214546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:17.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.209380   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.708970   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.212349   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.711167   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.208762   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.708487   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.212601   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:30.209319   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.708822   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:29.711046   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:35.207798   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.217291   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:34.710869   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.210140   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.707745   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711335   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.708871   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711327   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.207582   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.207988   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:48.709297   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.211602   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.714689   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.208519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.208808   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:49.212952   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.214415   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.710355   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.209145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:57.210556   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.716301   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:58.211226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:59.709541   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.208573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:00.709819   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.712699   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.208754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:06.708448   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:08.709286   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.713780   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:07.213872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:10.709570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:13.208062   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:09.714259   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:12.211448   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:15.209488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:17.709522   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:14.710693   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:16.711192   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:20.207874   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:22.211189   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:19.210191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:21.210773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:23.213975   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:24.708835   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:26.708889   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:25.710691   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:27.711139   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:29.209704   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:31.209811   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:33.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:30.210569   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:32.211539   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:35.708998   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:38.208295   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:34.711729   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:37.210492   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:40.707726   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:42.709246   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:39.211926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:41.711599   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:43.711794   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:44.710010   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:47.208407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:46.211285   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:48.212279   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:49.208869   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:51.210676   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:53.708315   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:50.212776   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:52.710665   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:55.709867   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:58.210415   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:54.711312   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:57.210611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:00.708385   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:03.208916   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210900   60073 pod_ready.go:81] duration metric: took 4m0.008455197s waiting for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	E0117 00:03:59.210913   60073 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:03:59.210923   60073 pod_ready.go:38] duration metric: took 4m1.999568751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:03:59.210941   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:03:59.210977   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:03:59.211045   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:03:59.268921   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.268947   60073 cri.go:89] found id: ""
	I0117 00:03:59.268956   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:03:59.269005   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.273505   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:03:59.273575   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:03:59.316812   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:03:59.316838   60073 cri.go:89] found id: ""
	I0117 00:03:59.316847   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:03:59.316902   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.321703   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:03:59.321778   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:03:59.365900   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:03:59.365920   60073 cri.go:89] found id: ""
	I0117 00:03:59.365927   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:03:59.365979   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.371077   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:03:59.371148   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:03:59.410379   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:03:59.410405   60073 cri.go:89] found id: ""
	I0117 00:03:59.410415   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:03:59.410475   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.414679   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:03:59.414752   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:03:59.452102   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.452137   60073 cri.go:89] found id: ""
	I0117 00:03:59.452146   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:03:59.452208   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.456735   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:03:59.456805   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:03:59.497070   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:03:59.497097   60073 cri.go:89] found id: ""
	I0117 00:03:59.497105   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:03:59.497172   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.501388   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:03:59.501464   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:03:59.542895   60073 cri.go:89] found id: ""
	I0117 00:03:59.542921   60073 logs.go:284] 0 containers: []
	W0117 00:03:59.542929   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:03:59.542935   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:03:59.542986   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:03:59.579487   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:03:59.579510   60073 cri.go:89] found id: ""
	I0117 00:03:59.579529   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:03:59.579583   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.583247   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:03:59.583272   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:03:59.682098   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:03:59.682136   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:03:59.811527   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:03:59.811555   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.858592   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:03:59.858623   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.896044   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:03:59.896077   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:00.305516   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:00.305553   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:00.346703   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:00.346734   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:00.360638   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:00.360671   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:00.405575   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:00.405607   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:00.443294   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:00.443325   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:00.489541   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:00.489572   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:00.547805   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:00.547835   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.085588   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:03.102500   60073 api_server.go:72] duration metric: took 4m7.940532649s to wait for apiserver process to appear ...
	I0117 00:04:03.102525   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:03.102560   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:03.102604   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:03.154743   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.154765   60073 cri.go:89] found id: ""
	I0117 00:04:03.154775   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:03.154837   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.158905   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:03.158964   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:03.199001   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.199026   60073 cri.go:89] found id: ""
	I0117 00:04:03.199035   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:03.199090   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.203757   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:03.203821   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:03.243821   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:03.243853   60073 cri.go:89] found id: ""
	I0117 00:04:03.243862   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:03.243926   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.248835   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:03.248938   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:03.287785   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.287807   60073 cri.go:89] found id: ""
	I0117 00:04:03.287817   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:03.287879   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.291737   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:03.291795   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:03.329647   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.329671   60073 cri.go:89] found id: ""
	I0117 00:04:03.329680   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:03.329740   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.337418   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:03.337513   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:03.375391   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:03.375412   60073 cri.go:89] found id: ""
	I0117 00:04:03.375419   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:03.375468   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.379630   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:03.379697   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:03.418311   60073 cri.go:89] found id: ""
	I0117 00:04:03.418353   60073 logs.go:284] 0 containers: []
	W0117 00:04:03.418366   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:03.418374   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:03.418425   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:03.464391   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.464414   60073 cri.go:89] found id: ""
	I0117 00:04:03.464421   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:03.464465   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.469427   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:03.469463   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:03.568016   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:03.568061   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:03.581553   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:03.581578   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.628971   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:03.629007   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.679732   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:03.679768   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.728836   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:03.728875   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.771849   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:03.771879   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:03.902777   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:03.902816   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.952219   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:03.952255   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:04.003190   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:04.003247   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:05.708428   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:07.708492   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:04.067058   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:04.067090   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:04.446812   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:04.446869   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:07.005449   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0117 00:04:07.011401   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0117 00:04:07.012696   60073 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:07.012723   60073 api_server.go:131] duration metric: took 3.910192448s to wait for apiserver health ...
	I0117 00:04:07.012732   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:07.012758   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:07.012804   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:07.052667   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:07.052699   60073 cri.go:89] found id: ""
	I0117 00:04:07.052708   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:07.052769   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.057415   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:07.057482   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:07.096347   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.096374   60073 cri.go:89] found id: ""
	I0117 00:04:07.096383   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:07.096445   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.100499   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:07.100598   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:07.145539   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:07.145561   60073 cri.go:89] found id: ""
	I0117 00:04:07.145567   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:07.145625   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.149880   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:07.149936   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:07.188723   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:07.188751   60073 cri.go:89] found id: ""
	I0117 00:04:07.188760   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:07.188822   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.193191   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:07.193259   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:07.236787   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.236811   60073 cri.go:89] found id: ""
	I0117 00:04:07.236820   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:07.236876   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.241167   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:07.241219   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:07.279432   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.279453   60073 cri.go:89] found id: ""
	I0117 00:04:07.279462   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:07.279527   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.283548   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:07.283618   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:07.319879   60073 cri.go:89] found id: ""
	I0117 00:04:07.319912   60073 logs.go:284] 0 containers: []
	W0117 00:04:07.319922   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:07.319930   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:07.319992   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:07.356138   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.356162   60073 cri.go:89] found id: ""
	I0117 00:04:07.356170   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:07.356219   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.360310   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:07.360339   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:07.457151   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:07.457197   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.501163   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:07.501207   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.544248   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:07.544279   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.593284   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:07.593321   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.635978   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:07.636016   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:07.950451   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:07.950489   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:08.003046   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:08.003089   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:08.017299   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:08.017341   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:08.152348   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:08.152401   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:08.213047   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:08.213084   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:08.249860   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:08.249897   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:10.813629   60073 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:10.813656   60073 system_pods.go:61] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.813670   60073 system_pods.go:61] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.813676   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.813681   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.813685   60073 system_pods.go:61] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.813689   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.813695   60073 system_pods.go:61] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.813699   60073 system_pods.go:61] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.813707   60073 system_pods.go:74] duration metric: took 3.800969531s to wait for pod list to return data ...
	I0117 00:04:10.813714   60073 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:10.816640   60073 default_sa.go:45] found service account: "default"
	I0117 00:04:10.816662   60073 default_sa.go:55] duration metric: took 2.941561ms for default service account to be created ...
	I0117 00:04:10.816669   60073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:10.823246   60073 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:10.823270   60073 system_pods.go:89] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.823274   60073 system_pods.go:89] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.823279   60073 system_pods.go:89] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.823283   60073 system_pods.go:89] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.823287   60073 system_pods.go:89] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.823291   60073 system_pods.go:89] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.823297   60073 system_pods.go:89] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.823302   60073 system_pods.go:89] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.823309   60073 system_pods.go:126] duration metric: took 6.635452ms to wait for k8s-apps to be running ...
	I0117 00:04:10.823316   60073 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:10.823358   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:10.840725   60073 system_svc.go:56] duration metric: took 17.401272ms WaitForService to wait for kubelet.
	I0117 00:04:10.840756   60073 kubeadm.go:581] duration metric: took 4m15.678792469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:10.840782   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:10.843904   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:10.843926   60073 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:10.843938   60073 node_conditions.go:105] duration metric: took 3.150197ms to run NodePressure ...
	I0117 00:04:10.843949   60073 start.go:228] waiting for startup goroutines ...
	I0117 00:04:10.843954   60073 start.go:233] waiting for cluster config update ...
	I0117 00:04:10.843963   60073 start.go:242] writing updated cluster config ...
	I0117 00:04:10.844214   60073 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:10.894554   60073 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:10.896971   60073 out.go:177] * Done! kubectl is now configured to use "embed-certs-837871" cluster and "default" namespace by default
	I0117 00:04:10.209252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:12.707441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:14.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:17.208289   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:19.708419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:21.708960   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:22.208465   60269 pod_ready.go:81] duration metric: took 4m0.007885269s waiting for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	E0117 00:04:22.208486   60269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:04:22.208494   60269 pod_ready.go:38] duration metric: took 4m2.594399816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:04:22.208508   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:04:22.208558   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:22.208608   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:22.258977   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.259005   60269 cri.go:89] found id: ""
	I0117 00:04:22.259013   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:22.259116   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.264067   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:22.264126   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:22.302361   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:22.302396   60269 cri.go:89] found id: ""
	I0117 00:04:22.302407   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:22.302471   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.306898   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:22.306956   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:22.347083   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.347110   60269 cri.go:89] found id: ""
	I0117 00:04:22.347119   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:22.347177   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.352368   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:22.352441   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:22.392093   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:22.392121   60269 cri.go:89] found id: ""
	I0117 00:04:22.392131   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:22.392264   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.397726   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:22.397791   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:22.434242   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:22.434265   60269 cri.go:89] found id: ""
	I0117 00:04:22.434275   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:22.434342   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.438904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:22.438969   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:22.474797   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.474818   60269 cri.go:89] found id: ""
	I0117 00:04:22.474828   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:22.474874   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.478956   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:22.479020   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:22.517049   60269 cri.go:89] found id: ""
	I0117 00:04:22.517078   60269 logs.go:284] 0 containers: []
	W0117 00:04:22.517089   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:22.517096   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:22.517160   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:22.566393   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:22.566419   60269 cri.go:89] found id: ""
	I0117 00:04:22.566428   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:22.566486   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.572179   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:22.572206   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.624440   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:22.624471   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.666603   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:22.666629   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.734797   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:22.734829   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:22.827906   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:22.827941   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:22.842239   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:22.842269   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:22.990196   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:22.990226   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:23.048894   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:23.048933   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:23.093309   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:23.093340   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:23.135374   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:23.135400   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:23.172339   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:23.172366   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:23.567228   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:23.567266   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:26.111237   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:26.127331   60269 api_server.go:72] duration metric: took 4m8.739316517s to wait for apiserver process to appear ...
	I0117 00:04:26.127358   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:26.127403   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:26.127465   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:26.164726   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:26.164752   60269 cri.go:89] found id: ""
	I0117 00:04:26.164763   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:26.164824   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.168448   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:26.168500   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:26.205643   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:26.205673   60269 cri.go:89] found id: ""
	I0117 00:04:26.205682   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:26.205742   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.209923   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:26.209982   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:26.247432   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:26.247456   60269 cri.go:89] found id: ""
	I0117 00:04:26.247463   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:26.247514   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.251904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:26.252009   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:26.292943   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.292971   60269 cri.go:89] found id: ""
	I0117 00:04:26.292980   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:26.293038   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.298224   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:26.298307   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:26.338299   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:26.338322   60269 cri.go:89] found id: ""
	I0117 00:04:26.338331   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:26.338398   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.342452   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:26.342520   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:26.384665   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.384693   60269 cri.go:89] found id: ""
	I0117 00:04:26.384702   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:26.384761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.389556   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:26.389629   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:26.427717   60269 cri.go:89] found id: ""
	I0117 00:04:26.427748   60269 logs.go:284] 0 containers: []
	W0117 00:04:26.427758   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:26.427766   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:26.427825   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:26.467435   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.467463   60269 cri.go:89] found id: ""
	I0117 00:04:26.467471   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:26.467529   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.471617   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:26.471641   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.514185   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:26.514216   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.569408   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:26.569440   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.610011   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:26.610040   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:26.976249   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:26.976286   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:27.019812   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:27.019855   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:27.064258   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:27.064285   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:27.104147   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:27.104181   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:27.157665   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:27.157695   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:27.255786   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:27.255824   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:27.269460   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:27.269497   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:27.420255   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:27.420288   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.008636   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0117 00:04:30.014467   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0117 00:04:30.015693   60269 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:30.015716   60269 api_server.go:131] duration metric: took 3.888351113s to wait for apiserver health ...
	I0117 00:04:30.015724   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:30.015745   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:30.015789   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:30.055587   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.055608   60269 cri.go:89] found id: ""
	I0117 00:04:30.055626   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:30.055677   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.060043   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:30.060108   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:30.102912   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:30.102938   60269 cri.go:89] found id: ""
	I0117 00:04:30.102946   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:30.102995   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.107429   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:30.107490   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:30.149238   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.149259   60269 cri.go:89] found id: ""
	I0117 00:04:30.149266   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:30.149318   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.154207   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:30.154276   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:30.195972   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.195998   60269 cri.go:89] found id: ""
	I0117 00:04:30.196008   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:30.196067   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.200515   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:30.200593   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:30.242656   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.242686   60269 cri.go:89] found id: ""
	I0117 00:04:30.242696   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:30.242761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.247430   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:30.247488   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:30.285008   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.285036   60269 cri.go:89] found id: ""
	I0117 00:04:30.285045   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:30.285123   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.292254   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:30.292325   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:30.329856   60269 cri.go:89] found id: ""
	I0117 00:04:30.329884   60269 logs.go:284] 0 containers: []
	W0117 00:04:30.329895   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:30.329902   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:30.329962   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:30.370003   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.370026   60269 cri.go:89] found id: ""
	I0117 00:04:30.370033   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:30.370081   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.374869   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:30.374896   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:30.388524   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:30.388564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:30.520901   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:30.520935   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.568977   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:30.569016   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.604580   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:30.604620   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.642634   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:30.642668   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.692005   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:30.692048   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:30.745471   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:30.745532   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:30.842886   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:30.842926   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.891850   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:30.891882   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.929266   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:30.929295   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:31.236511   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:31.236564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:33.783706   60269 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:33.783732   60269 system_pods.go:61] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.783737   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.783742   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.783746   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.783750   60269 system_pods.go:61] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.783754   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.783760   60269 system_pods.go:61] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.783764   60269 system_pods.go:61] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.783772   60269 system_pods.go:74] duration metric: took 3.768043559s to wait for pod list to return data ...
	I0117 00:04:33.783780   60269 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:33.786490   60269 default_sa.go:45] found service account: "default"
	I0117 00:04:33.786515   60269 default_sa.go:55] duration metric: took 2.725972ms for default service account to be created ...
	I0117 00:04:33.786525   60269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:33.793345   60269 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:33.793372   60269 system_pods.go:89] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.793377   60269 system_pods.go:89] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.793382   60269 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.793388   60269 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.793392   60269 system_pods.go:89] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.793396   60269 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.793404   60269 system_pods.go:89] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.793410   60269 system_pods.go:89] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.793417   60269 system_pods.go:126] duration metric: took 6.886472ms to wait for k8s-apps to be running ...
	I0117 00:04:33.793427   60269 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:33.793470   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:33.809147   60269 system_svc.go:56] duration metric: took 15.709692ms WaitForService to wait for kubelet.
	I0117 00:04:33.809197   60269 kubeadm.go:581] duration metric: took 4m16.421187944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:33.809225   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:33.813251   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:33.813289   60269 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:33.813315   60269 node_conditions.go:105] duration metric: took 4.084961ms to run NodePressure ...
	I0117 00:04:33.813339   60269 start.go:228] waiting for startup goroutines ...
	I0117 00:04:33.813349   60269 start.go:233] waiting for cluster config update ...
	I0117 00:04:33.813362   60269 start.go:242] writing updated cluster config ...
	I0117 00:04:33.813716   60269 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:33.866136   60269 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:33.868353   60269 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-967325" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:14:08 UTC. --
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.721629941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=29e998cd-2d10-4e4d-b68b-2ad0b35ad9c8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.721823928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=29e998cd-2d10-4e4d-b68b-2ad0b35ad9c8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.758620458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a4118bb3-d1d2-457e-825c-62c7569eed97 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.758700437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a4118bb3-d1d2-457e-825c-62c7569eed97 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.759603642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f75bbe2c-b3e1-4cbf-833e-a7a49230000a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.760149946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450448760133561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f75bbe2c-b3e1-4cbf-833e-a7a49230000a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.760595661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=141230a2-5e27-4573-ae7b-9f5627ecf2ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.760644216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=141230a2-5e27-4573-ae7b-9f5627ecf2ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.760850831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=141230a2-5e27-4573-ae7b-9f5627ecf2ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.780281284Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=46bb2b07-3c95-4192-8151-886239ec42ab name=/runtime.v1alpha2.RuntimeService/Status
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.780344359Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=46bb2b07-3c95-4192-8151-886239ec42ab name=/runtime.v1alpha2.RuntimeService/Status
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.799336467Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6ae5b573-8b87-4d04-b1af-edea15df0815 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.799386913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6ae5b573-8b87-4d04-b1af-edea15df0815 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.800798814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3d026cab-2fa8-47ec-811f-6a565b95e33a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.801281247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450448801265338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3d026cab-2fa8-47ec-811f-6a565b95e33a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.802075452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=afa689ee-3825-453e-bee7-2d709985144c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.802122690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=afa689ee-3825-453e-bee7-2d709985144c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.802292940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=afa689ee-3825-453e-bee7-2d709985144c name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.835563774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4e579f57-b782-48df-9a7b-b83f7d4de604 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.835622841Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4e579f57-b782-48df-9a7b-b83f7d4de604 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.837160975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=00f03925-3dc9-4541-aa93-6bc378462f02 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.837541100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450448837526288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=00f03925-3dc9-4541-aa93-6bc378462f02 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.838508076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a4f14bda-971f-4afe-aeca-0b10a8e252a1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.838557479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a4f14bda-971f-4afe-aeca-0b10a8e252a1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:08 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:08.838745247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a4f14bda-971f-4afe-aeca-0b10a8e252a1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9459eba4162be       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   0                   69a4cbb576850       busybox
	21a6dceb568ad       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      18 minutes ago      Running             coredns                   0                   861a780833a2d       coredns-5644d7b6d9-9njqp
	5cbd938949134       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       0                   51a17462d718a       storage-provisioner
	a613a4e4ddfe3       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      18 minutes ago      Running             kube-proxy                0                   9e58ca8a29986       kube-proxy-9ghls
	7a937abd3b903       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   453bb94b5ee72       etcd-old-k8s-version-771669
	f4999acc2d6d7       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   5f2e4e8fdc564       kube-apiserver-old-k8s-version-771669
	911f813160b15       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   e3d35b7aba356       kube-controller-manager-old-k8s-version-771669
	494f74041efd3       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   13d26353ba2d4       kube-scheduler-old-k8s-version-771669
	
	
	==> coredns [21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942] <==
	E0116 23:46:10.187359       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.193152       1 trace.go:82] Trace[785493325]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.186709268 +0000 UTC m=+0.081907198) (total time: 30.006404152s):
	Trace[785493325]: [30.006404152s] [30.006404152s] END
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.200490       1 trace.go:82] Trace[1301817211]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.19394028 +0000 UTC m=+0.089138224) (total time: 30.006532947s):
	Trace[1301817211]: [30.006532947s] [30.006532947s] END
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	2024-01-16T23:46:15.289Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2024-01-16T23:46:15.321Z [INFO] 127.0.0.1:57441 - 44193 "HINFO IN 1365412375578555759.7322076794870044211. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008071628s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-16T23:55:55.993Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-16T23:55:55.993Z [INFO] CoreDNS-1.6.2
	2024-01-16T23:55:55.993Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T23:55:56.003Z [INFO] 127.0.0.1:59166 - 17216 "HINFO IN 9081841845838306910.8543492278547947642. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009686681s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-771669
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-771669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=old-k8s-version-771669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_45_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:45:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.114
	  Hostname:    old-k8s-version-771669
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0599c334d1574c44852cd606008f4484
	 System UUID:                0599c334-d157-4c44-852c-d606008f4484
	 Boot ID:                    6a822f71-f4d9-4098-87a2-3d00d7bd6120
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-9njqp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-771669                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-apiserver-old-k8s-version-771669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-771669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-proxy-9ghls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-771669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                metrics-server-74d5856cc6-gj4zn                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-771669     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x7 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-771669     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074468] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.864255] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569582] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135010] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.485542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.831981] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.125426] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.166674] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.156891] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.236650] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +18.743957] systemd-fstab-generator[1024]: Ignoring "noauto" for root device
	[  +0.411438] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan16 23:56] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174] <==
	2024-01-16 23:55:46.463616 I | etcdserver: restarting member d80e54998a205cf3 in cluster fe5d4cbbe2066f7 at commit index 527
	2024-01-16 23:55:46.463912 I | raft: d80e54998a205cf3 became follower at term 2
	2024-01-16 23:55:46.463954 I | raft: newRaft d80e54998a205cf3 [peers: [], term: 2, commit: 527, applied: 0, lastindex: 527, lastterm: 2]
	2024-01-16 23:55:46.471794 W | auth: simple token is not cryptographically signed
	2024-01-16 23:55:46.474478 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 23:55:46.476050 I | etcdserver/membership: added member d80e54998a205cf3 [https://192.168.72.114:2380] to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:46.476228 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 23:55:46.476294 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 23:55:46.476369 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 23:55:46.476491 I | embed: listening for metrics on http://192.168.72.114:2381
	2024-01-16 23:55:46.477296 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 23:55:48.264496 I | raft: d80e54998a205cf3 is starting a new election at term 2
	2024-01-16 23:55:48.264548 I | raft: d80e54998a205cf3 became candidate at term 3
	2024-01-16 23:55:48.264567 I | raft: d80e54998a205cf3 received MsgVoteResp from d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.264578 I | raft: d80e54998a205cf3 became leader at term 3
	2024-01-16 23:55:48.264584 I | raft: raft.node: d80e54998a205cf3 elected leader d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.266381 I | etcdserver: published {Name:old-k8s-version-771669 ClientURLs:[https://192.168.72.114:2379]} to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:48.266872 I | embed: ready to serve client requests
	2024-01-16 23:55:48.267138 I | embed: ready to serve client requests
	2024-01-16 23:55:48.268857 I | embed: serving client requests on 192.168.72.114:2379
	2024-01-16 23:55:48.272176 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-17 00:05:48.299555 I | mvcc: store.index: compact 831
	2024-01-17 00:05:48.301444 I | mvcc: finished scheduled compaction at 831 (took 1.48289ms)
	2024-01-17 00:10:48.307018 I | mvcc: store.index: compact 1049
	2024-01-17 00:10:48.309423 I | mvcc: finished scheduled compaction at 1049 (took 1.556943ms)
	
	
	==> kernel <==
	 00:14:09 up 19 min,  0 users,  load average: 0.10, 0.13, 0.10
	Linux old-k8s-version-771669 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877] <==
	I0117 00:06:52.568125       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:06:52.568324       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:06:52.568428       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:06:52.568460       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:08:52.568751       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:08:52.569130       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:08:52.569216       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:08:52.569239       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:10:52.570364       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:10:52.570659       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:10:52.570748       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:10:52.570771       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:11:52.571161       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:11:52.571452       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:11:52.571520       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:11:52.571559       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:13:52.571887       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:13:52.572257       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:13:52.572431       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:13:52.572517       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f] <==
	E0117 00:07:44.159270       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:07:54.492892       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:08:14.411306       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:08:26.495091       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:08:44.663350       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:08:58.497544       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:09:14.915110       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:09:30.499632       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:09:45.167228       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:10:02.502151       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:10:15.419628       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:10:34.504463       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:10:45.671634       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:11:06.506665       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:11:15.924066       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:11:38.508658       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:11:46.176241       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:12:10.510374       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:12:16.428278       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:12:42.512853       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:12:46.680268       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:13:14.515039       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:13:16.932502       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:13:46.516781       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:13:47.184739       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7] <==
	W0116 23:45:41.007361       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:45:41.016329       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:45:41.016352       1 server_others.go:149] Using iptables Proxier.
	I0116 23:45:41.016667       1 server.go:529] Version: v1.16.0
	I0116 23:45:41.018410       1 config.go:131] Starting endpoints config controller
	I0116 23:45:41.024018       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:45:41.018730       1 config.go:313] Starting service config controller
	I0116 23:45:41.024397       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:45:41.124802       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:45:41.125007       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0116 23:55:53.969591       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:55:53.981521       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:55:53.981589       1 server_others.go:149] Using iptables Proxier.
	I0116 23:55:53.982391       1 server.go:529] Version: v1.16.0
	I0116 23:55:53.983881       1 config.go:313] Starting service config controller
	I0116 23:55:53.983929       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:55:53.984039       1 config.go:131] Starting endpoints config controller
	I0116 23:55:53.984056       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:55:54.084183       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:55:54.084427       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d] <==
	E0116 23:45:19.290133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:45:19.293479       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 23:45:19.294843       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 23:45:19.296276       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 23:45:19.297284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 23:45:19.302219       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 23:45:19.306970       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:45:19.307150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.307930       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.308102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0116 23:55:45.888159       1 serving.go:319] Generated self-signed cert in-memory
	W0116 23:55:51.429069       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 23:55:51.429295       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:55:51.429326       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 23:55:51.429407       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 23:55:51.479301       1 server.go:143] Version: v1.16.0
	I0116 23:55:51.479424       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0116 23:55:51.496560       1 authorization.go:47] Authorization is disabled
	W0116 23:55:51.496594       1 authentication.go:79] Authentication is disabled
	I0116 23:55:51.496610       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 23:55:51.497402       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 23:55:51.544869       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:55:51.545090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545174       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545242       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:14:09 UTC. --
	Jan 17 00:09:41 old-k8s-version-771669 kubelet[1030]: E0117 00:09:41.444444    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:09:54 old-k8s-version-771669 kubelet[1030]: E0117 00:09:54.444474    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:06 old-k8s-version-771669 kubelet[1030]: E0117 00:10:06.444489    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:19 old-k8s-version-771669 kubelet[1030]: E0117 00:10:19.444124    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:32 old-k8s-version-771669 kubelet[1030]: E0117 00:10:32.444520    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:43 old-k8s-version-771669 kubelet[1030]: E0117 00:10:43.517317    1030 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 17 00:10:45 old-k8s-version-771669 kubelet[1030]: E0117 00:10:45.444419    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:57 old-k8s-version-771669 kubelet[1030]: E0117 00:10:57.446558    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:12 old-k8s-version-771669 kubelet[1030]: E0117 00:11:12.444116    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:23 old-k8s-version-771669 kubelet[1030]: E0117 00:11:23.449289    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:38 old-k8s-version-771669 kubelet[1030]: E0117 00:11:38.444251    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455296    1030 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455380    1030 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455430    1030 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455461    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 17 00:12:03 old-k8s-version-771669 kubelet[1030]: E0117 00:12:03.444940    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:17 old-k8s-version-771669 kubelet[1030]: E0117 00:12:17.445881    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:30 old-k8s-version-771669 kubelet[1030]: E0117 00:12:30.445107    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:42 old-k8s-version-771669 kubelet[1030]: E0117 00:12:42.444153    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:53 old-k8s-version-771669 kubelet[1030]: E0117 00:12:53.445269    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:07 old-k8s-version-771669 kubelet[1030]: E0117 00:13:07.444539    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:21 old-k8s-version-771669 kubelet[1030]: E0117 00:13:21.444749    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:34 old-k8s-version-771669 kubelet[1030]: E0117 00:13:34.444779    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:47 old-k8s-version-771669 kubelet[1030]: E0117 00:13:47.445167    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:14:01 old-k8s-version-771669 kubelet[1030]: E0117 00:14:01.444446    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3] <==
	I0116 23:45:41.784762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:45:41.799195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:45:41.799369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:45:41.808193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:45:41.809025       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:45:41.810922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4 became leader
	I0116 23:45:41.909835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:55:55.015814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:55:55.084172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:55:55.084535       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:56:12.492253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:56:12.492881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	I0116 23:56:12.493615       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0 became leader
	I0116 23:56:12.593934       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-771669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-gj4zn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1 (66.824234ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-gj4zn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (411.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0117 00:08:31.442458   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0117 00:08:38.241478   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0117 00:09:41.171554   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0117 00:09:55.289789   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0117 00:10:10.014815   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0117 00:10:47.136891   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0117 00:11:00.968392   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0117 00:12:23.603563   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0117 00:12:32.960102   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-085322 -n no-preload-085322
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:15:13.35251125 +0000 UTC m=+5936.568015058
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-085322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-085322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.858µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-085322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-085322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-085322 logs -n 25: (1.262243066s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-771669 image                           | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	| start   | -p newest-cni-353558 --memory=2200 --alsologtostderr   | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/17 00:14:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0117 00:14:17.379388   65526 out.go:296] Setting OutFile to fd 1 ...
	I0117 00:14:17.379677   65526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0117 00:14:17.379686   65526 out.go:309] Setting ErrFile to fd 2...
	I0117 00:14:17.379691   65526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0117 00:14:17.379892   65526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0117 00:14:17.380447   65526 out.go:303] Setting JSON to false
	I0117 00:14:17.381444   65526 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7004,"bootTime":1705443454,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0117 00:14:17.381500   65526 start.go:138] virtualization: kvm guest
	I0117 00:14:17.384129   65526 out.go:177] * [newest-cni-353558] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0117 00:14:17.385744   65526 notify.go:220] Checking for updates...
	I0117 00:14:17.387023   65526 out.go:177]   - MINIKUBE_LOCATION=17975
	I0117 00:14:17.388407   65526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0117 00:14:17.389946   65526 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:14:17.391408   65526 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0117 00:14:17.392707   65526 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0117 00:14:17.394130   65526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0117 00:14:17.396061   65526 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0117 00:14:17.396200   65526 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0117 00:14:17.396329   65526 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0117 00:14:17.396440   65526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0117 00:14:17.435126   65526 out.go:177] * Using the kvm2 driver based on user configuration
	I0117 00:14:17.436744   65526 start.go:298] selected driver: kvm2
	I0117 00:14:17.436760   65526 start.go:902] validating driver "kvm2" against <nil>
	I0117 00:14:17.436771   65526 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0117 00:14:17.437449   65526 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0117 00:14:17.437524   65526 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0117 00:14:17.452238   65526 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0117 00:14:17.452281   65526 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0117 00:14:17.452315   65526 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0117 00:14:17.452561   65526 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0117 00:14:17.452624   65526 cni.go:84] Creating CNI manager for ""
	I0117 00:14:17.452636   65526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:14:17.452648   65526 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0117 00:14:17.452660   65526 start_flags.go:321] config:
	{Name:newest-cni-353558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0117 00:14:17.452865   65526 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0117 00:14:17.455209   65526 out.go:177] * Starting control plane node newest-cni-353558 in cluster newest-cni-353558
	I0117 00:14:17.456584   65526 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0117 00:14:17.456621   65526 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0117 00:14:17.456635   65526 cache.go:56] Caching tarball of preloaded images
	I0117 00:14:17.456735   65526 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0117 00:14:17.456751   65526 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0117 00:14:17.456860   65526 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/config.json ...
	I0117 00:14:17.456887   65526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/config.json: {Name:mk0d5962b113fb75fdcfa0c650bb25a6b4344e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:17.457040   65526 start.go:365] acquiring machines lock for newest-cni-353558: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0117 00:14:17.457084   65526 start.go:369] acquired machines lock for "newest-cni-353558" in 27.784µs
	I0117 00:14:17.457107   65526 start.go:93] Provisioning new machine with config: &{Name:newest-cni-353558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:14:17.457205   65526 start.go:125] createHost starting for "" (driver="kvm2")
	I0117 00:14:17.459671   65526 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0117 00:14:17.459885   65526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:14:17.459941   65526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:14:17.474062   65526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0117 00:14:17.474490   65526 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:14:17.475059   65526 main.go:141] libmachine: Using API Version  1
	I0117 00:14:17.475086   65526 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:14:17.475395   65526 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:14:17.475582   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetMachineName
	I0117 00:14:17.475758   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:17.475958   65526 start.go:159] libmachine.API.Create for "newest-cni-353558" (driver="kvm2")
	I0117 00:14:17.476015   65526 client.go:168] LocalClient.Create starting
	I0117 00:14:17.476050   65526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem
	I0117 00:14:17.476092   65526 main.go:141] libmachine: Decoding PEM data...
	I0117 00:14:17.476125   65526 main.go:141] libmachine: Parsing certificate...
	I0117 00:14:17.476195   65526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem
	I0117 00:14:17.476222   65526 main.go:141] libmachine: Decoding PEM data...
	I0117 00:14:17.476245   65526 main.go:141] libmachine: Parsing certificate...
	I0117 00:14:17.476274   65526 main.go:141] libmachine: Running pre-create checks...
	I0117 00:14:17.476284   65526 main.go:141] libmachine: (newest-cni-353558) Calling .PreCreateCheck
	I0117 00:14:17.476630   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetConfigRaw
	I0117 00:14:17.477035   65526 main.go:141] libmachine: Creating machine...
	I0117 00:14:17.477052   65526 main.go:141] libmachine: (newest-cni-353558) Calling .Create
	I0117 00:14:17.477181   65526 main.go:141] libmachine: (newest-cni-353558) Creating KVM machine...
	I0117 00:14:17.478423   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found existing default KVM network
	I0117 00:14:17.479608   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.479471   65549 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e2:f1:c5} reservation:<nil>}
	I0117 00:14:17.480395   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.480314   65549 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d0:58:76} reservation:<nil>}
	I0117 00:14:17.481257   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.481193   65549 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1f:51:3b} reservation:<nil>}
	I0117 00:14:17.482361   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.482268   65549 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027d810}
	I0117 00:14:17.487748   65526 main.go:141] libmachine: (newest-cni-353558) DBG | trying to create private KVM network mk-newest-cni-353558 192.168.72.0/24...
	I0117 00:14:17.563794   65526 main.go:141] libmachine: (newest-cni-353558) Setting up store path in /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558 ...
	I0117 00:14:17.563879   65526 main.go:141] libmachine: (newest-cni-353558) Building disk image from file:///home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0117 00:14:17.563899   65526 main.go:141] libmachine: (newest-cni-353558) DBG | private KVM network mk-newest-cni-353558 192.168.72.0/24 created
	I0117 00:14:17.563918   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.563688   65549 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17975-6238/.minikube
	I0117 00:14:17.564016   65526 main.go:141] libmachine: (newest-cni-353558) Downloading /home/jenkins/minikube-integration/17975-6238/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0117 00:14:17.773157   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.773009   65549 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa...
	I0117 00:14:17.909702   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.909553   65549 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/newest-cni-353558.rawdisk...
	I0117 00:14:17.909742   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Writing magic tar header
	I0117 00:14:17.909764   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Writing SSH key tar header
	I0117 00:14:17.909779   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:17.909701   65549 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558 ...
	I0117 00:14:17.909873   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558
	I0117 00:14:17.909920   65526 main.go:141] libmachine: (newest-cni-353558) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558 (perms=drwx------)
	I0117 00:14:17.909939   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube/machines
	I0117 00:14:17.909957   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238/.minikube
	I0117 00:14:17.909967   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17975-6238
	I0117 00:14:17.909977   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0117 00:14:17.909983   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Checking permissions on dir: /home/jenkins
	I0117 00:14:17.909991   65526 main.go:141] libmachine: (newest-cni-353558) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube/machines (perms=drwxr-xr-x)
	I0117 00:14:17.910005   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Checking permissions on dir: /home
	I0117 00:14:17.910023   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Skipping /home - not owner
	I0117 00:14:17.910042   65526 main.go:141] libmachine: (newest-cni-353558) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238/.minikube (perms=drwxr-xr-x)
	I0117 00:14:17.910058   65526 main.go:141] libmachine: (newest-cni-353558) Setting executable bit set on /home/jenkins/minikube-integration/17975-6238 (perms=drwxrwxr-x)
	I0117 00:14:17.910093   65526 main.go:141] libmachine: (newest-cni-353558) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0117 00:14:17.910117   65526 main.go:141] libmachine: (newest-cni-353558) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0117 00:14:17.910129   65526 main.go:141] libmachine: (newest-cni-353558) Creating domain...
	I0117 00:14:17.911257   65526 main.go:141] libmachine: (newest-cni-353558) define libvirt domain using xml: 
	I0117 00:14:17.911276   65526 main.go:141] libmachine: (newest-cni-353558) <domain type='kvm'>
	I0117 00:14:17.911284   65526 main.go:141] libmachine: (newest-cni-353558)   <name>newest-cni-353558</name>
	I0117 00:14:17.911290   65526 main.go:141] libmachine: (newest-cni-353558)   <memory unit='MiB'>2200</memory>
	I0117 00:14:17.911330   65526 main.go:141] libmachine: (newest-cni-353558)   <vcpu>2</vcpu>
	I0117 00:14:17.911389   65526 main.go:141] libmachine: (newest-cni-353558)   <features>
	I0117 00:14:17.911408   65526 main.go:141] libmachine: (newest-cni-353558)     <acpi/>
	I0117 00:14:17.911421   65526 main.go:141] libmachine: (newest-cni-353558)     <apic/>
	I0117 00:14:17.911434   65526 main.go:141] libmachine: (newest-cni-353558)     <pae/>
	I0117 00:14:17.911449   65526 main.go:141] libmachine: (newest-cni-353558)     
	I0117 00:14:17.911463   65526 main.go:141] libmachine: (newest-cni-353558)   </features>
	I0117 00:14:17.911474   65526 main.go:141] libmachine: (newest-cni-353558)   <cpu mode='host-passthrough'>
	I0117 00:14:17.911489   65526 main.go:141] libmachine: (newest-cni-353558)   
	I0117 00:14:17.911501   65526 main.go:141] libmachine: (newest-cni-353558)   </cpu>
	I0117 00:14:17.911513   65526 main.go:141] libmachine: (newest-cni-353558)   <os>
	I0117 00:14:17.911523   65526 main.go:141] libmachine: (newest-cni-353558)     <type>hvm</type>
	I0117 00:14:17.911537   65526 main.go:141] libmachine: (newest-cni-353558)     <boot dev='cdrom'/>
	I0117 00:14:17.911548   65526 main.go:141] libmachine: (newest-cni-353558)     <boot dev='hd'/>
	I0117 00:14:17.911561   65526 main.go:141] libmachine: (newest-cni-353558)     <bootmenu enable='no'/>
	I0117 00:14:17.911573   65526 main.go:141] libmachine: (newest-cni-353558)   </os>
	I0117 00:14:17.911585   65526 main.go:141] libmachine: (newest-cni-353558)   <devices>
	I0117 00:14:17.911600   65526 main.go:141] libmachine: (newest-cni-353558)     <disk type='file' device='cdrom'>
	I0117 00:14:17.911612   65526 main.go:141] libmachine: (newest-cni-353558)       <source file='/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/boot2docker.iso'/>
	I0117 00:14:17.911640   65526 main.go:141] libmachine: (newest-cni-353558)       <target dev='hdc' bus='scsi'/>
	I0117 00:14:17.911663   65526 main.go:141] libmachine: (newest-cni-353558)       <readonly/>
	I0117 00:14:17.911678   65526 main.go:141] libmachine: (newest-cni-353558)     </disk>
	I0117 00:14:17.911690   65526 main.go:141] libmachine: (newest-cni-353558)     <disk type='file' device='disk'>
	I0117 00:14:17.911709   65526 main.go:141] libmachine: (newest-cni-353558)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0117 00:14:17.911726   65526 main.go:141] libmachine: (newest-cni-353558)       <source file='/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/newest-cni-353558.rawdisk'/>
	I0117 00:14:17.911755   65526 main.go:141] libmachine: (newest-cni-353558)       <target dev='hda' bus='virtio'/>
	I0117 00:14:17.911780   65526 main.go:141] libmachine: (newest-cni-353558)     </disk>
	I0117 00:14:17.911806   65526 main.go:141] libmachine: (newest-cni-353558)     <interface type='network'>
	I0117 00:14:17.911829   65526 main.go:141] libmachine: (newest-cni-353558)       <source network='mk-newest-cni-353558'/>
	I0117 00:14:17.911843   65526 main.go:141] libmachine: (newest-cni-353558)       <model type='virtio'/>
	I0117 00:14:17.911861   65526 main.go:141] libmachine: (newest-cni-353558)     </interface>
	I0117 00:14:17.911870   65526 main.go:141] libmachine: (newest-cni-353558)     <interface type='network'>
	I0117 00:14:17.911878   65526 main.go:141] libmachine: (newest-cni-353558)       <source network='default'/>
	I0117 00:14:17.911884   65526 main.go:141] libmachine: (newest-cni-353558)       <model type='virtio'/>
	I0117 00:14:17.911891   65526 main.go:141] libmachine: (newest-cni-353558)     </interface>
	I0117 00:14:17.911901   65526 main.go:141] libmachine: (newest-cni-353558)     <serial type='pty'>
	I0117 00:14:17.911909   65526 main.go:141] libmachine: (newest-cni-353558)       <target port='0'/>
	I0117 00:14:17.911916   65526 main.go:141] libmachine: (newest-cni-353558)     </serial>
	I0117 00:14:17.911924   65526 main.go:141] libmachine: (newest-cni-353558)     <console type='pty'>
	I0117 00:14:17.911932   65526 main.go:141] libmachine: (newest-cni-353558)       <target type='serial' port='0'/>
	I0117 00:14:17.911940   65526 main.go:141] libmachine: (newest-cni-353558)     </console>
	I0117 00:14:17.911946   65526 main.go:141] libmachine: (newest-cni-353558)     <rng model='virtio'>
	I0117 00:14:17.911955   65526 main.go:141] libmachine: (newest-cni-353558)       <backend model='random'>/dev/random</backend>
	I0117 00:14:17.911963   65526 main.go:141] libmachine: (newest-cni-353558)     </rng>
	I0117 00:14:17.911973   65526 main.go:141] libmachine: (newest-cni-353558)     
	I0117 00:14:17.911992   65526 main.go:141] libmachine: (newest-cni-353558)     
	I0117 00:14:17.912009   65526 main.go:141] libmachine: (newest-cni-353558)   </devices>
	I0117 00:14:17.912021   65526 main.go:141] libmachine: (newest-cni-353558) </domain>
	I0117 00:14:17.912032   65526 main.go:141] libmachine: (newest-cni-353558) 
	I0117 00:14:17.916616   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:10:cc:4f in network default
	I0117 00:14:17.917305   65526 main.go:141] libmachine: (newest-cni-353558) Ensuring networks are active...
	I0117 00:14:17.917324   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:17.918045   65526 main.go:141] libmachine: (newest-cni-353558) Ensuring network default is active
	I0117 00:14:17.918383   65526 main.go:141] libmachine: (newest-cni-353558) Ensuring network mk-newest-cni-353558 is active
	I0117 00:14:17.918993   65526 main.go:141] libmachine: (newest-cni-353558) Getting domain xml...
	I0117 00:14:17.919791   65526 main.go:141] libmachine: (newest-cni-353558) Creating domain...
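
After rendering the domain XML shown above, the driver defines the domain in libvirt, makes sure the two networks are active, and then creates (boots) it. A hedged sketch of the define-and-create step using the libvirt Go bindings (libvirt.org/go/libvirt); the connection URI, file path, and error handling are assumptions, not the driver's exact code.

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("domain.xml") // the <domain> document shown above
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// DomainDefineXML makes the domain persistent; Create actually boots it.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain started")
	}
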
	I0117 00:14:19.161969   65526 main.go:141] libmachine: (newest-cni-353558) Waiting to get IP...
	I0117 00:14:19.162780   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:19.163265   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:19.163295   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:19.163216   65549 retry.go:31] will retry after 294.437649ms: waiting for machine to come up
	I0117 00:14:19.459829   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:19.460279   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:19.460307   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:19.460242   65549 retry.go:31] will retry after 375.229572ms: waiting for machine to come up
	I0117 00:14:19.837507   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:19.837968   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:19.838021   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:19.837924   65549 retry.go:31] will retry after 392.416914ms: waiting for machine to come up
	I0117 00:14:20.231416   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:20.231903   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:20.231937   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:20.231842   65549 retry.go:31] will retry after 396.093237ms: waiting for machine to come up
	I0117 00:14:20.629431   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:20.629846   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:20.629878   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:20.629798   65549 retry.go:31] will retry after 545.861659ms: waiting for machine to come up
	I0117 00:14:21.177265   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:21.177719   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:21.177752   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:21.177652   65549 retry.go:31] will retry after 838.165736ms: waiting for machine to come up
	I0117 00:14:22.017577   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:22.018021   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:22.018072   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:22.017995   65549 retry.go:31] will retry after 1.079004477s: waiting for machine to come up
	I0117 00:14:23.098126   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:23.098610   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:23.098640   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:23.098572   65549 retry.go:31] will retry after 1.318670581s: waiting for machine to come up
	I0117 00:14:24.419026   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:24.419600   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:24.419626   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:24.419533   65549 retry.go:31] will retry after 1.855076849s: waiting for machine to come up
	I0117 00:14:26.276932   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:26.277436   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:26.277470   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:26.277388   65549 retry.go:31] will retry after 1.907428442s: waiting for machine to come up
	I0117 00:14:28.186226   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:28.186745   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:28.186774   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:28.186697   65549 retry.go:31] will retry after 2.481916767s: waiting for machine to come up
	I0117 00:14:30.669879   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:30.670409   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:30.670440   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:30.670370   65549 retry.go:31] will retry after 2.775251799s: waiting for machine to come up
	I0117 00:14:33.447908   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:33.448505   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:33.448539   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:33.448454   65549 retry.go:31] will retry after 3.427216804s: waiting for machine to come up
	I0117 00:14:36.879710   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:36.880436   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:14:36.880464   65526 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:14:36.880379   65549 retry.go:31] will retry after 4.20897136s: waiting for machine to come up
	I0117 00:14:41.091335   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.091831   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has current primary IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.091873   65526 main.go:141] libmachine: (newest-cni-353558) Found IP for machine: 192.168.72.238
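
The "Waiting to get IP" block above polls the libvirt network's DHCP leases with a growing backoff until the guest's MAC acquires an address. A simplified sketch of that polling loop; lookupLeaseIP is a hypothetical stand-in for the real lease query, and the exact backoff growth is an assumption.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupLeaseIP stands in for querying the libvirt network's DHCP leases
	// by MAC address; it returns "" until the guest has an address.
	func lookupLeaseIP(mac string) (string, error) {
		return "", nil // placeholder
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookupLeaseIP(mac)
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
			if backoff < 5*time.Second {
				backoff += backoff / 2 // grow roughly like the retries above
			}
		}
		return "", errors.New("timed out waiting for IP")
	}

	func main() {
		ip, err := waitForIP("52:54:00:54:c2:59", 3*time.Minute)
		fmt.Println(ip, err)
	}
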
	I0117 00:14:41.091897   65526 main.go:141] libmachine: (newest-cni-353558) Reserving static IP address...
	I0117 00:14:41.092309   65526 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find host DHCP lease matching {name: "newest-cni-353558", mac: "52:54:00:54:c2:59", ip: "192.168.72.238"} in network mk-newest-cni-353558
	I0117 00:14:41.169703   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Getting to WaitForSSH function...
	I0117 00:14:41.169755   65526 main.go:141] libmachine: (newest-cni-353558) Reserved static IP address: 192.168.72.238
	I0117 00:14:41.169771   65526 main.go:141] libmachine: (newest-cni-353558) Waiting for SSH to be available...
	I0117 00:14:41.172788   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.173240   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.173271   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.173451   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Using SSH client type: external
	I0117 00:14:41.173494   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa (-rw-------)
	I0117 00:14:41.173527   65526 main.go:141] libmachine: (newest-cni-353558) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0117 00:14:41.173549   65526 main.go:141] libmachine: (newest-cni-353558) DBG | About to run SSH command:
	I0117 00:14:41.173566   65526 main.go:141] libmachine: (newest-cni-353558) DBG | exit 0
	I0117 00:14:41.270401   65526 main.go:141] libmachine: (newest-cni-353558) DBG | SSH cmd err, output: <nil>: 
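
WaitForSSH simply runs "exit 0" on the guest through an external ssh client with the options listed above; a zero exit status means sshd is reachable. A small sketch of that probe using os/exec; the key path and the 30x2s retry policy here are assumptions.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest; a nil error means sshd is up.
	func sshReady(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		for i := 0; i < 30; i++ {
			if err := sshReady("192.168.72.238", "/path/to/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}
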
	I0117 00:14:41.270694   65526 main.go:141] libmachine: (newest-cni-353558) KVM machine creation complete!
	I0117 00:14:41.271033   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetConfigRaw
	I0117 00:14:41.271572   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:41.271860   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:41.272091   65526 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0117 00:14:41.272110   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetState
	I0117 00:14:41.273721   65526 main.go:141] libmachine: Detecting operating system of created instance...
	I0117 00:14:41.273736   65526 main.go:141] libmachine: Waiting for SSH to be available...
	I0117 00:14:41.273746   65526 main.go:141] libmachine: Getting to WaitForSSH function...
	I0117 00:14:41.273756   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:41.276761   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.277117   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.277149   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.277272   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:41.277444   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.277642   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.277792   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:41.277948   65526 main.go:141] libmachine: Using SSH client type: native
	I0117 00:14:41.278388   65526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:14:41.278413   65526 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0117 00:14:41.413720   65526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0117 00:14:41.413755   65526 main.go:141] libmachine: Detecting the provisioner...
	I0117 00:14:41.413768   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:41.416772   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.417153   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.417181   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.417326   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:41.417544   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.417716   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.417844   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:41.417994   65526 main.go:141] libmachine: Using SSH client type: native
	I0117 00:14:41.418357   65526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:14:41.418374   65526 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0117 00:14:41.550972   65526 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0117 00:14:41.551056   65526 main.go:141] libmachine: found compatible host: buildroot
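
Provisioner detection is just "cat /etc/os-release" over SSH followed by a key=value parse; the ID/NAME fields decide which provisioner ("buildroot" here) is used. A small sketch of that parse, assuming the command output has already been captured as a string.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns /etc/os-release output into a map, trimming quotes.
	func parseOSRelease(out string) map[string]string {
		kv := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			parts := strings.SplitN(line, "=", 2)
			if len(parts) != 2 {
				continue
			}
			kv[parts[0]] = strings.Trim(parts[1], `"`)
		}
		return kv
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2021.02.12-1-g19d536a-dirty\nID=buildroot\nVERSION_ID=2021.02.12\nPRETTY_NAME=\"Buildroot 2021.02.12\"\n"
		info := parseOSRelease(out)
		fmt.Println("compatible host:", info["ID"]) // buildroot
	}
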
	I0117 00:14:41.551071   65526 main.go:141] libmachine: Provisioning with buildroot...
	I0117 00:14:41.551086   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetMachineName
	I0117 00:14:41.551375   65526 buildroot.go:166] provisioning hostname "newest-cni-353558"
	I0117 00:14:41.551407   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetMachineName
	I0117 00:14:41.551587   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:41.554631   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.554975   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.555000   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.555219   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:41.555433   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.555680   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.555867   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:41.556065   65526 main.go:141] libmachine: Using SSH client type: native
	I0117 00:14:41.556379   65526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:14:41.556394   65526 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-353558 && echo "newest-cni-353558" | sudo tee /etc/hostname
	I0117 00:14:41.699401   65526 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-353558
	
	I0117 00:14:41.699439   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:41.702232   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.702628   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.702661   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.702849   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:41.703032   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.703215   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.703373   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:41.703554   65526 main.go:141] libmachine: Using SSH client type: native
	I0117 00:14:41.703925   65526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:14:41.703943   65526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-353558' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-353558/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-353558' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0117 00:14:41.842387   65526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0117 00:14:41.842418   65526 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0117 00:14:41.842493   65526 buildroot.go:174] setting up certificates
	I0117 00:14:41.842511   65526 provision.go:83] configureAuth start
	I0117 00:14:41.842534   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetMachineName
	I0117 00:14:41.842861   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:14:41.845551   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.845966   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.845996   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.846176   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:41.849099   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.849494   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.849519   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.849670   65526 provision.go:138] copyHostCerts
	I0117 00:14:41.849752   65526 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0117 00:14:41.849768   65526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0117 00:14:41.849848   65526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0117 00:14:41.849980   65526 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0117 00:14:41.849992   65526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0117 00:14:41.850032   65526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0117 00:14:41.850129   65526 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0117 00:14:41.850141   65526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0117 00:14:41.850187   65526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0117 00:14:41.850271   65526 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.newest-cni-353558 san=[192.168.72.238 192.168.72.238 localhost 127.0.0.1 minikube newest-cni-353558]
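
The server certificate above is signed by the shared minikube CA and carries the SAN list from the log line (the node IP, localhost, 127.0.0.1, minikube, and the profile hostname). A hedged crypto/x509 sketch of issuing such a certificate; the file paths, PKCS#1 key encoding, validity window, and subject are assumptions for illustration.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the existing CA (paths are placeholders).
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			log.Fatal(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			log.Fatal("bad PEM input")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
		if err != nil {
			log.Fatal(err)
		}

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-353558"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list taken from the log line above.
			DNSNames:    []string{"localhost", "minikube", "newest-cni-353558"},
			IPAddresses: []net.IP{net.ParseIP("192.168.72.238"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
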
	I0117 00:14:41.964828   65526 provision.go:172] copyRemoteCerts
	I0117 00:14:41.964883   65526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0117 00:14:41.964914   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:41.967574   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.968037   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:41.968062   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:41.968250   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:41.968451   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:41.968670   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:41.968843   65526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:14:42.064538   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0117 00:14:42.086272   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0117 00:14:42.106928   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0117 00:14:42.127685   65526 provision.go:86] duration metric: configureAuth took 285.154871ms
	I0117 00:14:42.127719   65526 buildroot.go:189] setting minikube options for container-runtime
	I0117 00:14:42.127915   65526 config.go:182] Loaded profile config "newest-cni-353558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0117 00:14:42.127984   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:42.130879   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.131199   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.131239   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.131388   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:42.131615   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:42.131841   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:42.131968   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:42.132166   65526 main.go:141] libmachine: Using SSH client type: native
	I0117 00:14:42.132546   65526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:14:42.132565   65526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0117 00:14:42.459843   65526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0117 00:14:42.459928   65526 main.go:141] libmachine: Checking connection to Docker...
	I0117 00:14:42.459947   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetURL
	I0117 00:14:42.461351   65526 main.go:141] libmachine: (newest-cni-353558) DBG | Using libvirt version 6000000
	I0117 00:14:42.463826   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.464218   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.464251   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.464414   65526 main.go:141] libmachine: Docker is up and running!
	I0117 00:14:42.464431   65526 main.go:141] libmachine: Reticulating splines...
	I0117 00:14:42.464440   65526 client.go:171] LocalClient.Create took 24.988413342s
	I0117 00:14:42.464466   65526 start.go:167] duration metric: libmachine.API.Create for "newest-cni-353558" took 24.988510119s
	I0117 00:14:42.464478   65526 start.go:300] post-start starting for "newest-cni-353558" (driver="kvm2")
	I0117 00:14:42.464493   65526 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0117 00:14:42.464515   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:42.464809   65526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0117 00:14:42.464834   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:42.467122   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.467431   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.467458   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.467596   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:42.467768   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:42.467941   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:42.468091   65526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:14:42.563965   65526 ssh_runner.go:195] Run: cat /etc/os-release
	I0117 00:14:42.568043   65526 info.go:137] Remote host: Buildroot 2021.02.12
	I0117 00:14:42.568070   65526 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0117 00:14:42.568128   65526 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0117 00:14:42.568224   65526 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0117 00:14:42.568334   65526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0117 00:14:42.576959   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0117 00:14:42.601264   65526 start.go:303] post-start completed in 136.769582ms
	I0117 00:14:42.601313   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetConfigRaw
	I0117 00:14:42.602031   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:14:42.604840   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.605244   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.605279   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.605521   65526 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/config.json ...
	I0117 00:14:42.605706   65526 start.go:128] duration metric: createHost completed in 25.148490337s
	I0117 00:14:42.605727   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:42.608289   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.608610   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.608636   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.608810   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:42.609028   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:42.609201   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:42.609409   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:42.609619   65526 main.go:141] libmachine: Using SSH client type: native
	I0117 00:14:42.609930   65526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:14:42.609942   65526 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0117 00:14:42.743028   65526 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705450482.716246159
	
	I0117 00:14:42.743058   65526 fix.go:206] guest clock: 1705450482.716246159
	I0117 00:14:42.743072   65526 fix.go:219] Guest: 2024-01-17 00:14:42.716246159 +0000 UTC Remote: 2024-01-17 00:14:42.605717435 +0000 UTC m=+25.277366573 (delta=110.528724ms)
	I0117 00:14:42.743095   65526 fix.go:190] guest clock delta is within tolerance: 110.528724ms
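
The fix.go lines above compare the guest clock (read with `date +%s.%N`) against the host clock and only act when the delta exceeds a tolerance. A small sketch of that comparison; the 2-second tolerance used here is an assumption for illustration.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses "seconds.nanoseconds" output from `date +%s.%N`
	// and returns how far the guest clock is from the given host time.
	func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		delta, _ := guestClockDelta("1705450482.716246159", time.Now())
		if delta < 2*time.Second && delta > -2*time.Second {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would adjust the clock\n", delta)
		}
	}
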
	I0117 00:14:42.743102   65526 start.go:83] releasing machines lock for "newest-cni-353558", held for 25.286006719s
	I0117 00:14:42.743131   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:42.743429   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:14:42.746787   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.747125   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.747150   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.747315   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:42.747836   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:42.748082   65526 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:14:42.748177   65526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0117 00:14:42.748235   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:42.748330   65526 ssh_runner.go:195] Run: cat /version.json
	I0117 00:14:42.748379   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:14:42.750984   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.751015   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.751394   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.751418   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.751446   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:42.751462   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:42.751542   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:42.751718   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:14:42.751731   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:42.751919   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:42.751942   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:14:42.752076   65526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:14:42.752154   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:14:42.752334   65526 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:14:42.881513   65526 ssh_runner.go:195] Run: systemctl --version
	I0117 00:14:42.887424   65526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0117 00:14:43.046436   65526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0117 00:14:43.052008   65526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0117 00:14:43.052086   65526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0117 00:14:43.066373   65526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
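
Because the cluster will bring its own bridge CNI, any pre-existing bridge/podman configs in /etc/cni/net.d are renamed with a .mk_disabled suffix (the find/mv run above). A local-filesystem sketch of the same rename, assuming direct access to the directory rather than running over SSH.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfigs renames bridge/podman CNI configs so the runtime ignores them.
	func disableCNIConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableCNIConfigs("/etc/cni/net.d")
		fmt.Println(disabled, err)
	}
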
	I0117 00:14:43.066399   65526 start.go:475] detecting cgroup driver to use...
	I0117 00:14:43.066466   65526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0117 00:14:43.081996   65526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0117 00:14:43.093804   65526 docker.go:217] disabling cri-docker service (if available) ...
	I0117 00:14:43.093867   65526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0117 00:14:43.105436   65526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0117 00:14:43.117649   65526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0117 00:14:43.231212   65526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0117 00:14:43.358454   65526 docker.go:233] disabling docker service ...
	I0117 00:14:43.358550   65526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0117 00:14:43.372640   65526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0117 00:14:43.384129   65526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0117 00:14:43.489979   65526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0117 00:14:43.598837   65526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0117 00:14:43.611596   65526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0117 00:14:43.628498   65526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0117 00:14:43.628568   65526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0117 00:14:43.638944   65526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0117 00:14:43.639031   65526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0117 00:14:43.649643   65526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0117 00:14:43.659994   65526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
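
The pause image and cgroup manager are set by rewriting lines in /etc/crio/crio.conf.d/02-crio.conf (the sed commands above). A rough Go equivalent using regexp replacement; the file path and values are taken from the log, while the rewrite helper itself is illustrative.

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// setConfLine replaces every line matching pattern with repl, in place.
	func setConfLine(path, pattern, repl string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile("(?m)" + pattern)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		if err := setConfLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`); err != nil {
			log.Fatal(err)
		}
		if err := setConfLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
			log.Fatal(err)
		}
	}
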
	I0117 00:14:43.670425   65526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0117 00:14:43.680350   65526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0117 00:14:43.689182   65526 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0117 00:14:43.689247   65526 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0117 00:14:43.702589   65526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0117 00:14:43.711682   65526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0117 00:14:43.819604   65526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0117 00:14:43.989766   65526 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0117 00:14:43.989843   65526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0117 00:14:43.994146   65526 start.go:543] Will wait 60s for crictl version
	I0117 00:14:43.994203   65526 ssh_runner.go:195] Run: which crictl
	I0117 00:14:43.997446   65526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0117 00:14:44.043264   65526 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0117 00:14:44.043362   65526 ssh_runner.go:195] Run: crio --version
	I0117 00:14:44.092053   65526 ssh_runner.go:195] Run: crio --version
	I0117 00:14:44.140878   65526 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0117 00:14:44.142361   65526 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:14:44.145219   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:44.145652   65526 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:14:44.145681   65526 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:14:44.145847   65526 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0117 00:14:44.149603   65526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0117 00:14:44.162685   65526 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0117 00:14:44.164254   65526 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0117 00:14:44.164325   65526 ssh_runner.go:195] Run: sudo crictl images --output json
	I0117 00:14:44.196633   65526 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0117 00:14:44.196709   65526 ssh_runner.go:195] Run: which lz4
	I0117 00:14:44.200617   65526 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0117 00:14:44.204569   65526 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0117 00:14:44.204601   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0117 00:14:45.791396   65526 crio.go:444] Took 1.590806 seconds to copy over tarball
	I0117 00:14:45.791480   65526 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0117 00:14:48.259183   65526 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.467670875s)
	I0117 00:14:48.259208   65526 crio.go:451] Took 2.467781 seconds to extract the tarball
	I0117 00:14:48.259226   65526 ssh_runner.go:146] rm: /preloaded.tar.lz4
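
Since no preloaded image tarball existed on the guest, the ~400 MB preload was copied over, unpacked into /var with tar + lz4, and then removed (the stat/scp/tar/rm sequence above). A hedged sketch of that sequence driven through the external ssh/scp clients; the key path is a placeholder and error handling is simplified.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
		}
		return nil
	}

	func main() {
		const (
			host    = "docker@192.168.72.238"
			key     = "/path/to/id_rsa" // placeholder
			tarball = "/home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4"
		)
		ssh := func(cmd string) error { return run("ssh", "-i", key, host, cmd) }

		// Only copy the preload if the guest doesn't already have one.
		if ssh("stat /preloaded.tar.lz4") != nil {
			if err := run("scp", "-i", key, tarball, host+":/preloaded.tar.lz4"); err != nil {
				panic(err)
			}
			if err := ssh("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
				panic(err)
			}
			_ = ssh("sudo rm /preloaded.tar.lz4")
		}
	}
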
	I0117 00:14:48.303787   65526 ssh_runner.go:195] Run: sudo crictl images --output json
	I0117 00:14:48.389685   65526 crio.go:496] all images are preloaded for cri-o runtime.
	I0117 00:14:48.389717   65526 cache_images.go:84] Images are preloaded, skipping loading
	I0117 00:14:48.389818   65526 ssh_runner.go:195] Run: crio config
	I0117 00:14:48.463798   65526 cni.go:84] Creating CNI manager for ""
	I0117 00:14:48.463826   65526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:14:48.463853   65526 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0117 00:14:48.463886   65526 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-353558 NodeName:newest-cni-353558 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0117 00:14:48.464099   65526 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-353558"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
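
The evictionHard entries above (nodefs.available, nodefs.inodesFree, imagefs.available) appear in this log as "0%!"(MISSING); the value actually written to the kubelet config is "0%". The garbling is a standard Go printf artifact: the already-rendered YAML is echoed through a printf-style call, and the unescaped percent sign followed by a quote is read as a format verb with no matching argument. A minimal Go sketch reproducing the behavior (the template string is illustrative, not minikube's actual source; go vet would flag the first call, which is exactly the bug being shown):

	package main

	import "fmt"

	func main() {
		// The rendered config line contains a literal percent sign.
		line := `nodefs.available: "0%"`

		// Passing it as a printf format string treats %" as a verb with a
		// missing operand, which is what the log above shows.
		fmt.Printf(line + "\n") // nodefs.available: "0%!"(MISSING)

		// Printing the string as data (or escaping % as %%) keeps the
		// intended value.
		fmt.Printf("%s\n", line) // nodefs.available: "0%"
	}
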
	I0117 00:14:48.464204   65526 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-353558 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0117 00:14:48.464275   65526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0117 00:14:48.475072   65526 binaries.go:44] Found k8s binaries, skipping transfer
	I0117 00:14:48.475148   65526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0117 00:14:48.484738   65526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0117 00:14:48.501272   65526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0117 00:14:48.518699   65526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0117 00:14:48.535800   65526 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I0117 00:14:48.539612   65526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0117 00:14:48.551305   65526 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558 for IP: 192.168.72.238
	I0117 00:14:48.551341   65526 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:48.551508   65526 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0117 00:14:48.551563   65526 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0117 00:14:48.551617   65526 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/client.key
	I0117 00:14:48.551636   65526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/client.crt with IP's: []
	I0117 00:14:49.199557   65526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/client.crt ...
	I0117 00:14:49.199587   65526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/client.crt: {Name:mka912e08111440e7aec16e109a54058b6f8e346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:49.199747   65526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/client.key ...
	I0117 00:14:49.199758   65526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/client.key: {Name:mk12173f01262f800cde12bb3ec03baef5880e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:49.199824   65526 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.key.ce7be801
	I0117 00:14:49.199838   65526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.crt.ce7be801 with IP's: [192.168.72.238 10.96.0.1 127.0.0.1 10.0.0.1]
	I0117 00:14:49.362400   65526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.crt.ce7be801 ...
	I0117 00:14:49.362430   65526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.crt.ce7be801: {Name:mkfc843a38589f1afd0bc4c0b5f18eb6066f7227 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:49.362596   65526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.key.ce7be801 ...
	I0117 00:14:49.362617   65526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.key.ce7be801: {Name:mk2cfc5179b140730ac0c8c6d35fbee6be9b0563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:49.362710   65526 certs.go:337] copying /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.crt.ce7be801 -> /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.crt
	I0117 00:14:49.362814   65526 certs.go:341] copying /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.key.ce7be801 -> /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.key
	I0117 00:14:49.362893   65526 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.key
	I0117 00:14:49.362915   65526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.crt with IP's: []
	I0117 00:14:49.456322   65526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.crt ...
	I0117 00:14:49.456361   65526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.crt: {Name:mkaa7153ef4ede9a2098fc4dc381d7471af62667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:49.456529   65526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.key ...
	I0117 00:14:49.456544   65526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.key: {Name:mka0b15b4b1304a391b20c7612dec831d81e7007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:14:49.456751   65526 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0117 00:14:49.456796   65526 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0117 00:14:49.456810   65526 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0117 00:14:49.456847   65526 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0117 00:14:49.456880   65526 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0117 00:14:49.456917   65526 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0117 00:14:49.456978   65526 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0117 00:14:49.457616   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0117 00:14:49.485639   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0117 00:14:49.507597   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0117 00:14:49.528870   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0117 00:14:49.551936   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0117 00:14:49.576933   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0117 00:14:49.601406   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0117 00:14:49.624137   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0117 00:14:49.648477   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0117 00:14:49.669218   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0117 00:14:49.689853   65526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0117 00:14:49.710915   65526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0117 00:14:49.726076   65526 ssh_runner.go:195] Run: openssl version
	I0117 00:14:49.731637   65526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0117 00:14:49.743108   65526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0117 00:14:49.747524   65526 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0117 00:14:49.747590   65526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0117 00:14:49.752916   65526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0117 00:14:49.762713   65526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0117 00:14:49.773544   65526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0117 00:14:49.777996   65526 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0117 00:14:49.778056   65526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0117 00:14:49.783178   65526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0117 00:14:49.791902   65526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0117 00:14:49.803085   65526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0117 00:14:49.807616   65526 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0117 00:14:49.807687   65526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0117 00:14:49.813419   65526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
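
The hash-and-link passes above follow the OpenSSL c_rehash convention: each CA certificate copied under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A minimal Go sketch of the same step, assuming openssl is on PATH (the paths and the linkCert helper are illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert mirrors the step in the log: compute the OpenSSL subject hash
	// of a CA certificate and symlink it into certDir as "<hash>.0" so TLS
	// clients that scan the directory can find it.
	func linkCert(certPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certDir, hash+".0")
		// Equivalent of "ln -fs": drop any stale link before recreating it.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		// Illustrative paths; minikube links /usr/share/ca-certificates/*.pem
		// into /etc/ssl/certs (requires root on a real host).
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
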
	I0117 00:14:49.824096   65526 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0117 00:14:49.828333   65526 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0117 00:14:49.828400   65526 kubeadm.go:404] StartCluster: {Name:newest-cni-353558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0117 00:14:49.828473   65526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0117 00:14:49.828511   65526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0117 00:14:49.870750   65526 cri.go:89] found id: ""
	I0117 00:14:49.870811   65526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0117 00:14:49.880419   65526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0117 00:14:49.890014   65526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0117 00:14:49.900478   65526 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0117 00:14:49.900524   65526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0117 00:14:50.016155   65526 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0117 00:14:50.016228   65526 kubeadm.go:322] [preflight] Running pre-flight checks
	I0117 00:14:50.246725   65526 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0117 00:14:50.246880   65526 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0117 00:14:50.247018   65526 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0117 00:14:50.487546   65526 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0117 00:14:50.619717   65526 out.go:204]   - Generating certificates and keys ...
	I0117 00:14:50.619861   65526 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0117 00:14:50.619981   65526 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0117 00:14:50.728135   65526 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0117 00:14:50.882457   65526 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0117 00:14:51.130762   65526 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0117 00:14:51.215416   65526 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0117 00:14:51.373812   65526 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0117 00:14:51.374184   65526 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-353558] and IPs [192.168.72.238 127.0.0.1 ::1]
	I0117 00:14:51.635338   65526 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0117 00:14:51.635697   65526 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-353558] and IPs [192.168.72.238 127.0.0.1 ::1]
	I0117 00:14:51.922952   65526 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0117 00:14:52.194497   65526 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0117 00:14:52.328307   65526 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0117 00:14:52.328568   65526 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0117 00:14:52.560992   65526 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0117 00:14:52.744953   65526 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0117 00:14:52.947286   65526 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0117 00:14:53.156457   65526 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0117 00:14:53.259119   65526 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0117 00:14:53.259785   65526 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0117 00:14:53.262198   65526 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0117 00:14:53.263940   65526 out.go:204]   - Booting up control plane ...
	I0117 00:14:53.264039   65526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0117 00:14:53.264660   65526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0117 00:14:53.266418   65526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0117 00:14:53.284116   65526 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0117 00:14:53.285064   65526 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0117 00:14:53.285127   65526 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0117 00:14:53.417754   65526 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0117 00:15:00.920813   65526 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504941 seconds
	I0117 00:15:00.942920   65526 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:15:00.966398   65526 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:15:01.508120   65526 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:15:01.508343   65526 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-353558 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:15:02.026033   65526 kubeadm.go:322] [bootstrap-token] Using token: 8i01kc.r310x3lcyoek13a4
	I0117 00:15:02.027501   65526 out.go:204]   - Configuring RBAC rules ...
	I0117 00:15:02.027641   65526 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:15:02.040105   65526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:15:02.048839   65526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:15:02.055616   65526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:15:02.059643   65526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:15:02.063272   65526 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:15:02.077945   65526 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:15:02.347889   65526 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:15:02.445672   65526 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:15:02.447796   65526 kubeadm.go:322] 
	I0117 00:15:02.447876   65526 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:15:02.447908   65526 kubeadm.go:322] 
	I0117 00:15:02.448044   65526 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:15:02.448073   65526 kubeadm.go:322] 
	I0117 00:15:02.448104   65526 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:15:02.448208   65526 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:15:02.448286   65526 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:15:02.448293   65526 kubeadm.go:322] 
	I0117 00:15:02.448374   65526 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:15:02.448379   65526 kubeadm.go:322] 
	I0117 00:15:02.448422   65526 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:15:02.448428   65526 kubeadm.go:322] 
	I0117 00:15:02.448498   65526 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:15:02.448591   65526 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:15:02.448680   65526 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:15:02.448684   65526 kubeadm.go:322] 
	I0117 00:15:02.448758   65526 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:15:02.448868   65526 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:15:02.448883   65526 kubeadm.go:322] 
	I0117 00:15:02.448991   65526 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8i01kc.r310x3lcyoek13a4 \
	I0117 00:15:02.449126   65526 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:15:02.449159   65526 kubeadm.go:322] 	--control-plane 
	I0117 00:15:02.449167   65526 kubeadm.go:322] 
	I0117 00:15:02.449283   65526 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:15:02.449303   65526 kubeadm.go:322] 
	I0117 00:15:02.449425   65526 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8i01kc.r310x3lcyoek13a4 \
	I0117 00:15:02.449577   65526 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:15:02.449981   65526 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:15:02.450015   65526 cni.go:84] Creating CNI manager for ""
	I0117 00:15:02.450027   65526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:15:02.452110   65526 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:15:02.453589   65526 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:15:02.499476   65526 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0117 00:15:02.546364   65526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:15:02.546455   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:02.546472   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=newest-cni-353558 minikube.k8s.io/updated_at=2024_01_17T00_15_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:02.851532   65526 ops.go:34] apiserver oom_adj: -16
	I0117 00:15:02.851673   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:03.351835   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:03.852214   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:04.351863   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:04.852465   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:05.352223   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:05.852110   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:06.352709   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:06.852330   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:07.352689   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:07.851819   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:08.351808   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:08.851784   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:09.352340   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:09.851705   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:10.351741   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:10.851725   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:11.352581   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:11.852273   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:15:12.351955   65526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:54:14 UTC, ends at Wed 2024-01-17 00:15:14 UTC. --
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.056333771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450514056320563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=137bcc08-51c3-4472-b1ac-b5631e2b5324 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.057018825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e8626b1-663e-476c-af44-285bff87d1bb name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.057088048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e8626b1-663e-476c-af44-285bff87d1bb name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.057305118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e8626b1-663e-476c-af44-285bff87d1bb name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.096711248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=94694eed-7919-4dc5-b182-57f8814bd906 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.096790513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=94694eed-7919-4dc5-b182-57f8814bd906 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.097729295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6460f677-a933-4756-a46d-bf9cdee6154c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.098113665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450514098101797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=6460f677-a933-4756-a46d-bf9cdee6154c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.098577477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cdae990b-0ff8-4844-99c3-24eb609e4b77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.098649060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cdae990b-0ff8-4844-99c3-24eb609e4b77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.098906637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cdae990b-0ff8-4844-99c3-24eb609e4b77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.138644650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f99b3d22-a57f-4a40-a760-62209b0bdafd name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.138726031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f99b3d22-a57f-4a40-a760-62209b0bdafd name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.141193804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=151107fa-760d-4cf1-9db0-d5ce399bfc46 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.141507792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450514141495194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=151107fa-760d-4cf1-9db0-d5ce399bfc46 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.142234052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=32d8e326-63a0-4b8a-b348-cd13465e6812 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.142305786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=32d8e326-63a0-4b8a-b348-cd13465e6812 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.142521803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=32d8e326-63a0-4b8a-b348-cd13465e6812 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.179935069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=45591ef3-b0d2-4154-9658-41f9bd7fe813 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.180047428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=45591ef3-b0d2-4154-9658-41f9bd7fe813 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.181635303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f05e788a-9aba-42bd-9b66-187303524ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.182044372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450514182026966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=f05e788a-9aba-42bd-9b66-187303524ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.183087474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9893d28b-8136-4c4d-b513-0843d7beb68b name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.183203318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9893d28b-8136-4c4d-b513-0843d7beb68b name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:14 no-preload-085322 crio[720]: time="2024-01-17 00:15:14.187791667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705449329706692719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2010450e59e244cc3e921bb0db6e770f15b91814fe3a7e0dc0922bbd8fe6955,PodSandboxId:63a3f258785a1a259d1c928c1e962f99bff0fb30b133d8ae21b237068504817e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449311816369096,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1680c487-b710-4a5a-8067-25277e4b4735,},Annotations:map[string]string{io.kubernetes.container.hash: 771764cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782,PodSandboxId:14b2a7aea6c5f9d99e784d5108d1f7572a94626a4e0625ce547037a467a09756,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705449306332592430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ptq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b52129d-1f2b-49e8-abeb-b2737a6a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: f29a11ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6,PodSandboxId:2e2d4cf3252ef3a0e774a93dc0a09d35f554ea08d17df35d49b44289a3ec0b89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705449298448481175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 60efc797-82b9-4614-8e43-ccf7e2d72911,},Annotations:map[string]string{io.kubernetes.container.hash: 9168748a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f,PodSandboxId:75aac7f7149bc90e0b8f0058a5730cf5fb5f38c09e2010fee49da3a802451152,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705449298390710731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-64z5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f910ca-b57
7-47f6-a01a-4c7efadd20e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6269e059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d,PodSandboxId:2949cc2da8fdc7ea4930681e5a441428e5d509601af52d09d6c70e4101d62ce9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705449292972373583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf6a0d411260ec1bb4258d90f19b895,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 703878e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26,PodSandboxId:19a8a31f45f0cb12903c784186029ee87353c1611469872a8a04bf18dfaffbd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705449292886993479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77a34a113ca90a63dca3203f2dbb05b6,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1,PodSandboxId:1f366dc6c696e906b678d43c7aaf63d9cea9ac02fa177f9b23c4e1ceb3daa1f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705449292584472793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092c35adc55630b12575679316f57b37,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 85fe800c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db,PodSandboxId:25807c34c111e843d8c46ea70505039bf0a251e81cba8e70c1e1ede3e967a57a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705449292459913502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-085322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e824be6be133b30c3375f7c4b77ab75,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9893d28b-8136-4c4d-b513-0843d7beb68b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60416d35ab032       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   2e2d4cf3252ef       storage-provisioner
	f2010450e59e2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   63a3f258785a1       busybox
	77f52399b3a56       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   14b2a7aea6c5f       coredns-76f75df574-ptq95
	d53f5dc02719d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   2e2d4cf3252ef       storage-provisioner
	beec9bf02a170       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      20 minutes ago      Running             kube-proxy                1                   75aac7f7149bc       kube-proxy-64z5c
	3ae748115585f       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      20 minutes ago      Running             etcd                      1                   2949cc2da8fdc       etcd-no-preload-085322
	307723cb0d2c3       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      20 minutes ago      Running             kube-scheduler            1                   19a8a31f45f0c       kube-scheduler-no-preload-085322
	bf6b71506f3a6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      20 minutes ago      Running             kube-apiserver            1                   1f366dc6c696e       kube-apiserver-no-preload-085322
	fa4073a76d415       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      20 minutes ago      Running             kube-controller-manager   1                   25807c34c111e       kube-controller-manager-no-preload-085322
	
	
	==> coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41958 - 7134 "HINFO IN 4312849831828573737.8230304474284747680. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009782799s
	
	
	==> describe nodes <==
	Name:               no-preload-085322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-085322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=no-preload-085322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_46_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:46:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-085322
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jan 2024 00:15:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:10:43 +0000   Tue, 16 Jan 2024 23:46:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:10:43 +0000   Tue, 16 Jan 2024 23:46:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:10:43 +0000   Tue, 16 Jan 2024 23:46:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:10:43 +0000   Tue, 16 Jan 2024 23:55:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.183
	  Hostname:    no-preload-085322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e13f9fe9e7b4ec58d148ab9d15bf3f4
	  System UUID:                5e13f9fe-9e7b-4ec5-8d14-8ab9d15bf3f4
	  Boot ID:                    3ed7d3dd-fd9a-4acb-b2fd-65c880f13c81
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-76f75df574-ptq95                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-085322                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-085322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-085322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-64z5c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-085322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-xbr22              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-085322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-085322 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-085322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-085322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-085322 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-085322 event: Registered Node no-preload-085322 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-085322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-085322 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-085322 event: Registered Node no-preload-085322 in Controller
	
	
	==> dmesg <==
	[Jan16 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063360] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.275777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.689063] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136639] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.353841] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.270201] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.101382] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.141693] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.110008] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.221934] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +28.198260] systemd-fstab-generator[1332]: Ignoring "noauto" for root device
	[Jan16 23:55] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] <==
	{"level":"warn","ts":"2024-01-16T23:55:32.762361Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.901233Z","time spent":"861.122988ms","remote":"127.0.0.1:58838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4715,"request content":"key:\"/registry/minions/no-preload-085322\" "}
	{"level":"warn","ts":"2024-01-16T23:55:32.762497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"860.761475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22\" ","response":"range_response_count:1 size:4238"}
	{"level":"info","ts":"2024-01-16T23:55:32.762514Z","caller":"traceutil/trace.go:171","msg":"trace[424613040] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22; range_end:; response_count:1; response_revision:602; }","duration":"860.779365ms","start":"2024-01-16T23:55:31.901729Z","end":"2024-01-16T23:55:32.762508Z","steps":["trace[424613040] 'agreement among raft nodes before linearized reading'  (duration: 860.743395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:32.762548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.901721Z","time spent":"860.822422ms","remote":"127.0.0.1:58840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4260,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-xbr22\" "}
	{"level":"warn","ts":"2024-01-16T23:55:32.762794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.050206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T23:55:32.76282Z","caller":"traceutil/trace.go:171","msg":"trace[1739164744] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:602; }","duration":"376.0783ms","start":"2024-01-16T23:55:32.386733Z","end":"2024-01-16T23:55:32.762811Z","steps":["trace[1739164744] 'agreement among raft nodes before linearized reading'  (duration: 376.036004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:32.762838Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:32.386651Z","time spent":"376.183371ms","remote":"127.0.0.1:58792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-01-16T23:55:32.762986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"846.939988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:609"}
	{"level":"info","ts":"2024-01-16T23:55:32.763007Z","caller":"traceutil/trace.go:171","msg":"trace[184501532] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:602; }","duration":"846.994352ms","start":"2024-01-16T23:55:31.916006Z","end":"2024-01-16T23:55:32.763Z","steps":["trace[184501532] 'agreement among raft nodes before linearized reading'  (duration: 846.949866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T23:55:32.763027Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T23:55:31.915995Z","time spent":"847.026871ms","remote":"127.0.0.1:58836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":631,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-01-17T00:04:55.715074Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2024-01-17T00:04:55.717623Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":834,"took":"1.945748ms","hash":3413872945}
	{"level":"info","ts":"2024-01-17T00:04:55.717737Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3413872945,"revision":834,"compact-revision":-1}
	{"level":"info","ts":"2024-01-17T00:09:55.721869Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1077}
	{"level":"info","ts":"2024-01-17T00:09:55.723604Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1077,"took":"1.210061ms","hash":3235814839}
	{"level":"info","ts":"2024-01-17T00:09:55.723666Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3235814839,"revision":1077,"compact-revision":834}
	{"level":"info","ts":"2024-01-17T00:14:50.916277Z","caller":"traceutil/trace.go:171","msg":"trace[1103869466] linearizableReadLoop","detail":"{readStateIndex:1840; appliedIndex:1839; }","duration":"260.515976ms","start":"2024-01-17T00:14:50.655721Z","end":"2024-01-17T00:14:50.916237Z","steps":["trace[1103869466] 'read index received'  (duration: 260.298435ms)","trace[1103869466] 'applied index is now lower than readState.Index'  (duration: 217.227µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-17T00:14:50.916568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.775989ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:610"}
	{"level":"info","ts":"2024-01-17T00:14:50.916606Z","caller":"traceutil/trace.go:171","msg":"trace[1143607590] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1558; }","duration":"260.980399ms","start":"2024-01-17T00:14:50.655615Z","end":"2024-01-17T00:14:50.916595Z","steps":["trace[1143607590] 'agreement among raft nodes before linearized reading'  (duration: 260.812173ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-17T00:14:50.916801Z","caller":"traceutil/trace.go:171","msg":"trace[399779004] transaction","detail":"{read_only:false; response_revision:1558; number_of_response:1; }","duration":"262.035516ms","start":"2024-01-17T00:14:50.654757Z","end":"2024-01-17T00:14:50.916793Z","steps":["trace[399779004] 'process raft request'  (duration: 261.312654ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-17T00:14:51.144002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.474058ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13371905363659108046 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1557 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:521 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-17T00:14:51.144121Z","caller":"traceutil/trace.go:171","msg":"trace[899235252] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"223.621621ms","start":"2024-01-17T00:14:50.920484Z","end":"2024-01-17T00:14:51.144105Z","steps":["trace[899235252] 'process raft request'  (duration: 103.912487ms)","trace[899235252] 'compare'  (duration: 119.34582ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-17T00:14:55.729294Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1320}
	{"level":"info","ts":"2024-01-17T00:14:55.730761Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1320,"took":"1.158761ms","hash":73570229}
	{"level":"info","ts":"2024-01-17T00:14:55.73082Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":73570229,"revision":1320,"compact-revision":1077}
	
	
	==> kernel <==
	 00:15:14 up 21 min,  0 users,  load average: 0.16, 0.11, 0.13
	Linux no-preload-085322 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] <==
	I0117 00:09:57.990021       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:10:57.989467       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:10:57.989582       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:10:57.989590       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:10:57.990851       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:10:57.990957       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:10:57.990967       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:12:57.990266       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:12:57.990594       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:12:57.990660       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:12:57.991493       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:12:57.991555       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:12:57.992703       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:14:56.995342       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:14:56.995467       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0117 00:14:57.996123       1 handler_proxy.go:93] no RequestInfo found in the context
	W0117 00:14:57.996203       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:14:57.996450       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:14:57.996487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0117 00:14:57.996581       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:14:57.998464       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] <==
	I0117 00:09:40.623270       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:10.003956       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:10.631707       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:40.011020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:40.639625       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:10:57.508786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="319.934µs"
	E0117 00:11:10.022058       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:10.648233       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:11:11.509601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="77.363µs"
	E0117 00:11:40.029302       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:40.659927       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:10.034657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:10.669819       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:40.041526       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:40.678379       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:13:10.051849       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:13:10.688700       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:13:40.058596       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:13:40.697860       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:14:10.067463       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:14:10.708018       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:14:40.073253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:14:40.716674       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:15:10.084860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:15:10.726665       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] <==
	I0116 23:54:58.721175       1 server_others.go:72] "Using iptables proxy"
	I0116 23:54:58.738947       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.183"]
	I0116 23:54:58.801055       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0116 23:54:58.801091       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 23:54:58.801104       1 server_others.go:168] "Using iptables Proxier"
	I0116 23:54:58.803627       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 23:54:58.803902       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0116 23:54:58.803933       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 23:54:58.805905       1 config.go:188] "Starting service config controller"
	I0116 23:54:58.805944       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 23:54:58.805987       1 config.go:97] "Starting endpoint slice config controller"
	I0116 23:54:58.806012       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 23:54:58.809702       1 config.go:315] "Starting node config controller"
	I0116 23:54:58.809733       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 23:54:58.906195       1 shared_informer.go:318] Caches are synced for service config
	I0116 23:54:58.906129       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 23:54:58.909871       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] <==
	I0116 23:54:54.873211       1 serving.go:380] Generated self-signed cert in-memory
	W0116 23:54:56.939550       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 23:54:56.939690       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:54:56.939722       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 23:54:56.939807       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 23:54:56.992229       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0116 23:54:56.992267       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 23:54:56.993774       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 23:54:56.993822       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 23:54:56.994670       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 23:54:56.997464       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 23:54:57.094654       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:54:14 UTC, ends at Wed 2024-01-17 00:15:14 UTC. --
	Jan 17 00:12:41 no-preload-085322 kubelet[1338]: E0117 00:12:41.492301    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:12:51 no-preload-085322 kubelet[1338]: E0117 00:12:51.515872    1338 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:12:51 no-preload-085322 kubelet[1338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:12:51 no-preload-085322 kubelet[1338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:12:51 no-preload-085322 kubelet[1338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:12:55 no-preload-085322 kubelet[1338]: E0117 00:12:55.491656    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:13:08 no-preload-085322 kubelet[1338]: E0117 00:13:08.491858    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:13:22 no-preload-085322 kubelet[1338]: E0117 00:13:22.491221    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:13:34 no-preload-085322 kubelet[1338]: E0117 00:13:34.492384    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:13:49 no-preload-085322 kubelet[1338]: E0117 00:13:49.492720    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:13:51 no-preload-085322 kubelet[1338]: E0117 00:13:51.516763    1338 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:13:51 no-preload-085322 kubelet[1338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:13:51 no-preload-085322 kubelet[1338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:13:51 no-preload-085322 kubelet[1338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:14:02 no-preload-085322 kubelet[1338]: E0117 00:14:02.491334    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:14:15 no-preload-085322 kubelet[1338]: E0117 00:14:15.491958    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:14:26 no-preload-085322 kubelet[1338]: E0117 00:14:26.492874    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:14:41 no-preload-085322 kubelet[1338]: E0117 00:14:41.493377    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:14:51 no-preload-085322 kubelet[1338]: E0117 00:14:51.510726    1338 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 17 00:14:51 no-preload-085322 kubelet[1338]: E0117 00:14:51.520299    1338 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:14:51 no-preload-085322 kubelet[1338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:14:51 no-preload-085322 kubelet[1338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:14:51 no-preload-085322 kubelet[1338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:14:56 no-preload-085322 kubelet[1338]: E0117 00:14:56.492294    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	Jan 17 00:15:09 no-preload-085322 kubelet[1338]: E0117 00:15:09.492282    1338 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xbr22" podUID="04d3cffb-ab03-4d0d-8524-333d64531c87"
	
	
	==> storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] <==
	I0116 23:55:29.854869       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:55:29.874193       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:55:29.874272       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:55:29.896125       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:55:29.897624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-085322_6652c1d5-66e2-4448-8f82-bf4dac8216fa!
	I0116 23:55:29.896386       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16b5d283-67d6-42b9-93d6-48a37a448a5d", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-085322_6652c1d5-66e2-4448-8f82-bf4dac8216fa became leader
	I0116 23:55:29.998040       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-085322_6652c1d5-66e2-4448-8f82-bf4dac8216fa!
	
	
	==> storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] <==
	I0116 23:54:58.703944       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 23:55:28.706701       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-085322 -n no-preload-085322
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-085322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xbr22
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-085322 describe pod metrics-server-57f55c9bc5-xbr22
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-085322 describe pod metrics-server-57f55c9bc5-xbr22: exit status 1 (89.831283ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xbr22" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-085322 describe pod metrics-server-57f55c9bc5-xbr22: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (411.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (143.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0117 00:13:19.621660   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0117 00:13:31.442666   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-837871 -n embed-certs-837871
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:15:35.56227828 +0000 UTC m=+5958.777782091
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-837871 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-837871 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.838µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-837871 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-837871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-837871 logs -n 25: (1.139049278s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-771669 image                           | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	| start   | -p newest-cni-353558 --memory=2200 --alsologtostderr   | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	| addons  | enable metrics-server -p newest-cni-353558             | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-353558                                   | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-353558                  | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-353558 --memory=2200 --alsologtostderr   | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/17 00:15:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0117 00:15:21.357392   66213 out.go:296] Setting OutFile to fd 1 ...
	I0117 00:15:21.357533   66213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0117 00:15:21.357543   66213 out.go:309] Setting ErrFile to fd 2...
	I0117 00:15:21.357548   66213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0117 00:15:21.357745   66213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0117 00:15:21.358311   66213 out.go:303] Setting JSON to false
	I0117 00:15:21.359217   66213 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7068,"bootTime":1705443454,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0117 00:15:21.359274   66213 start.go:138] virtualization: kvm guest
	I0117 00:15:21.361652   66213 out.go:177] * [newest-cni-353558] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0117 00:15:21.362995   66213 out.go:177]   - MINIKUBE_LOCATION=17975
	I0117 00:15:21.364344   66213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0117 00:15:21.363076   66213 notify.go:220] Checking for updates...
	I0117 00:15:21.366867   66213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:15:21.368290   66213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0117 00:15:21.369607   66213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0117 00:15:21.370851   66213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0117 00:15:21.372465   66213 config.go:182] Loaded profile config "newest-cni-353558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0117 00:15:21.373095   66213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:15:21.373205   66213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:15:21.387847   66213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I0117 00:15:21.388221   66213 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:15:21.388708   66213 main.go:141] libmachine: Using API Version  1
	I0117 00:15:21.388733   66213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:15:21.389079   66213 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:15:21.389251   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:21.389474   66213 driver.go:392] Setting default libvirt URI to qemu:///system
	I0117 00:15:21.389758   66213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:15:21.389794   66213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:15:21.403700   66213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0117 00:15:21.404138   66213 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:15:21.404592   66213 main.go:141] libmachine: Using API Version  1
	I0117 00:15:21.404618   66213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:15:21.404911   66213 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:15:21.405113   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:21.440996   66213 out.go:177] * Using the kvm2 driver based on existing profile
	I0117 00:15:21.442550   66213 start.go:298] selected driver: kvm2
	I0117 00:15:21.442564   66213 start.go:902] validating driver "kvm2" against &{Name:newest-cni-353558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node
_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0117 00:15:21.442704   66213 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0117 00:15:21.443363   66213 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0117 00:15:21.443437   66213 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0117 00:15:21.457838   66213 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0117 00:15:21.458207   66213 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0117 00:15:21.458270   66213 cni.go:84] Creating CNI manager for ""
	I0117 00:15:21.458284   66213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:15:21.458293   66213 start_flags.go:321] config:
	{Name:newest-cni-353558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0117 00:15:21.458459   66213 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0117 00:15:21.460163   66213 out.go:177] * Starting control plane node newest-cni-353558 in cluster newest-cni-353558
	I0117 00:15:21.461304   66213 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0117 00:15:21.461340   66213 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0117 00:15:21.461350   66213 cache.go:56] Caching tarball of preloaded images
	I0117 00:15:21.461424   66213 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0117 00:15:21.461434   66213 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0117 00:15:21.461539   66213 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/config.json ...
	I0117 00:15:21.461730   66213 start.go:365] acquiring machines lock for newest-cni-353558: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0117 00:15:21.461773   66213 start.go:369] acquired machines lock for "newest-cni-353558" in 25.811µs
	I0117 00:15:21.461786   66213 start.go:96] Skipping create...Using existing machine configuration
	I0117 00:15:21.461792   66213 fix.go:54] fixHost starting: 
	I0117 00:15:21.462048   66213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:15:21.462076   66213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:15:21.475876   66213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0117 00:15:21.476349   66213 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:15:21.476782   66213 main.go:141] libmachine: Using API Version  1
	I0117 00:15:21.476805   66213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:15:21.477134   66213 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:15:21.477325   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:21.477483   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetState
	I0117 00:15:21.479041   66213 fix.go:102] recreateIfNeeded on newest-cni-353558: state=Stopped err=<nil>
	I0117 00:15:21.479081   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	W0117 00:15:21.479228   66213 fix.go:128] unexpected machine state, will restart: <nil>
	I0117 00:15:21.481226   66213 out.go:177] * Restarting existing kvm2 VM for "newest-cni-353558" ...
	I0117 00:15:21.482392   66213 main.go:141] libmachine: (newest-cni-353558) Calling .Start
	I0117 00:15:21.482546   66213 main.go:141] libmachine: (newest-cni-353558) Ensuring networks are active...
	I0117 00:15:21.483177   66213 main.go:141] libmachine: (newest-cni-353558) Ensuring network default is active
	I0117 00:15:21.483402   66213 main.go:141] libmachine: (newest-cni-353558) Ensuring network mk-newest-cni-353558 is active
	I0117 00:15:21.483918   66213 main.go:141] libmachine: (newest-cni-353558) Getting domain xml...
	I0117 00:15:21.484666   66213 main.go:141] libmachine: (newest-cni-353558) Creating domain...
	I0117 00:15:22.697137   66213 main.go:141] libmachine: (newest-cni-353558) Waiting to get IP...
	I0117 00:15:22.697981   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:22.698421   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:22.698502   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:22.698407   66248 retry.go:31] will retry after 210.660719ms: waiting for machine to come up
	I0117 00:15:22.910957   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:22.911455   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:22.911479   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:22.911396   66248 retry.go:31] will retry after 296.163069ms: waiting for machine to come up
	I0117 00:15:23.208713   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:23.209119   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:23.209141   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:23.209078   66248 retry.go:31] will retry after 325.581343ms: waiting for machine to come up
	I0117 00:15:23.536369   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:23.536892   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:23.536922   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:23.536842   66248 retry.go:31] will retry after 368.531657ms: waiting for machine to come up
	I0117 00:15:23.907428   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:23.907910   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:23.907940   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:23.907860   66248 retry.go:31] will retry after 533.164037ms: waiting for machine to come up
	I0117 00:15:24.442588   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:24.443084   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:24.443111   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:24.443002   66248 retry.go:31] will retry after 631.104771ms: waiting for machine to come up
	I0117 00:15:25.075312   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:25.075794   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:25.075817   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:25.075752   66248 retry.go:31] will retry after 1.042234653s: waiting for machine to come up
	I0117 00:15:26.119731   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:26.120298   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:26.120318   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:26.120244   66248 retry.go:31] will retry after 895.099913ms: waiting for machine to come up
	I0117 00:15:27.016803   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:27.017245   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:27.017273   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:27.017206   66248 retry.go:31] will retry after 1.148589522s: waiting for machine to come up
	I0117 00:15:28.167493   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:28.168019   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:28.168050   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:28.167954   66248 retry.go:31] will retry after 1.409133527s: waiting for machine to come up
	I0117 00:15:29.579361   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:29.579876   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:29.579910   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:29.579800   66248 retry.go:31] will retry after 2.227440478s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:54:33 UTC, ends at Wed 2024-01-17 00:15:36 UTC. --
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.226431491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450536226406896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f1775052-2f19-4e63-970c-efe7864c1581 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.226975219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ac593822-d832-4797-9e14-53be10740dc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.227019695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ac593822-d832-4797-9e14-53be10740dc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.227258891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ac593822-d832-4797-9e14-53be10740dc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.265781121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e29056ad-2ccb-4d45-b9f7-9fe537ce7c77 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.265861414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e29056ad-2ccb-4d45-b9f7-9fe537ce7c77 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.266999473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=15d4960b-b138-4658-8a41-4841b6255093 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.267497740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450536267483261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=15d4960b-b138-4658-8a41-4841b6255093 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.268007801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2d401814-ef51-4582-ae85-351c7d66e7de name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.268075777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2d401814-ef51-4582-ae85-351c7d66e7de name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.268341577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2d401814-ef51-4582-ae85-351c7d66e7de name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.306851956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=07c16fcb-fb77-4be3-80e1-cd1181a547ca name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.306988644Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=07c16fcb-fb77-4be3-80e1-cd1181a547ca name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.308645544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bb469ee7-f00a-4436-8cfa-25f190a47f4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.309028591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450536309012245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bb469ee7-f00a-4436-8cfa-25f190a47f4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.309764407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2b97c0f9-5c17-4254-aac4-d2e3cd39a6a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.309833251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2b97c0f9-5c17-4254-aac4-d2e3cd39a6a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.310004074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2b97c0f9-5c17-4254-aac4-d2e3cd39a6a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.345190696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8335f033-fffe-466e-8abe-52f8890af4d8 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.345266480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8335f033-fffe-466e-8abe-52f8890af4d8 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.346704880Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7aaf0b33-6765-4499-9f4b-69bae2645281 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.347278943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450536347262512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7aaf0b33-6765-4499-9f4b-69bae2645281 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.347883128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bc24fad9-924d-4b31-9112-3c66de850a18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.347993804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bc24fad9-924d-4b31-9112-3c66de850a18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:36 embed-certs-837871 crio[720]: time="2024-01-17 00:15:36.348267046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0,PodSandboxId:b4b8bdb35468aeaca574e0fa4aedb7045273539da2e55d1436b15a9232e6ff07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449598041358649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892c3a03-f9c9-46de-967a-6d2b9ea5c7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 835048b9,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd,PodSandboxId:71faeee9f3dba82747438c2c6625ac8ce83ea862c7804ee73faa5fa7dd6af6da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449597374407516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n2l6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85153ef8-2cfa-4fce-82a5-b66e94c2f400,},Annotations:map[string]string{io.kubernetes.container.hash: 91392bdb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743,PodSandboxId:fafe333a6c9592de2907afb0f026b6a3feda85a60be7e2e1558abb2084773a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449596854857541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-52xk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4fac6c4-b902-4f0f-9999-b212b64c94ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3ef361d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3,PodSandboxId:0dc299c074c7413ec9e9efad481bf7b033a10dfa5da58572c88d4770b7baa6e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449574623405332,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f2841e8e2815a92a1cffd5b7aa0a9c57,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d,PodSandboxId:31ff8832d6a32e6a2b2e6b726de7f469fc5ea4d965449f6d274d9b5061cb2575,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449574429046396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701d6d9562080acaaa87981005b8e98,},Annotations:
map[string]string{io.kubernetes.container.hash: 44b0dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44,PodSandboxId:f0518e04f9bf130b29a9d0b0fda55efda868019c0bd84b0b7afa42fecca65651,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449574265027680,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112d67c27002b2f7b627ec
dfcdf76d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699,PodSandboxId:ff56eb5c7469003f43bf9b4538f94498ff4b5f9c78e7773b5c658ea2a6858bcc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449574067871258,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-837871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848c09fd32b21929af686f03a3878c3,
},Annotations:map[string]string{io.kubernetes.container.hash: 621201c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bc24fad9-924d-4b31-9112-3c66de850a18 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	304b75257b98a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   b4b8bdb35468a       storage-provisioner
	85a871eaadf52       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   71faeee9f3dba       kube-proxy-n2l6s
	fbf799dc2641e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   fafe333a6c959       coredns-5dd5756b68-52xk7
	724ffd940ff03       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   0dc299c074c74       kube-scheduler-embed-certs-837871
	c4895b3e5cab3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   31ff8832d6a32       etcd-embed-certs-837871
	caa2304d7d208       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   f0518e04f9bf1       kube-controller-manager-embed-certs-837871
	d76dfa44d72e3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   ff56eb5c74690       kube-apiserver-embed-certs-837871
	
	
	==> coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35152 - 63620 "HINFO IN 2176552816251847159.3970859914954375329. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009871044s
	
	
	==> describe nodes <==
	Name:               embed-certs-837871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-837871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=embed-certs-837871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:59:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-837871
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jan 2024 00:15:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:15:20 +0000   Tue, 16 Jan 2024 23:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:15:20 +0000   Tue, 16 Jan 2024 23:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:15:20 +0000   Tue, 16 Jan 2024 23:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:15:20 +0000   Tue, 16 Jan 2024 23:59:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    embed-certs-837871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bfd53b8953e40fd928bb56312c38f54
	  System UUID:                3bfd53b8-953e-40fd-928b-b56312c38f54
	  Boot ID:                    4f31bcd8-c63c-45df-a685-5ed341fe0ce4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-52xk7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-837871                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-837871             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-837871    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-n2l6s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-837871             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-6rsbl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-837871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-837871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-837871 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node embed-certs-837871 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node embed-certs-837871 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-837871 event: Registered Node embed-certs-837871 in Controller
	
	
	==> dmesg <==
	[Jan16 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063248] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.353809] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.958677] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133426] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.417545] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.377031] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.106055] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.130034] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.116010] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.203868] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[ +16.825392] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[Jan16 23:55] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 23:59] systemd-fstab-generator[3481]: Ignoring "noauto" for root device
	[  +9.274995] systemd-fstab-generator[3839]: Ignoring "noauto" for root device
	[ +13.201502] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] <==
	{"level":"info","ts":"2024-01-16T23:59:37.012755Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T23:59:37.012744Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:37.029505Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5e6abf1d35eec4c5","local-member-id":"9e3e2863ac888927","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:37.02967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:37.032448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-17T00:09:37.047024Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":679}
	{"level":"info","ts":"2024-01-17T00:09:37.050244Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":679,"took":"2.531983ms","hash":582418373}
	{"level":"info","ts":"2024-01-17T00:09:37.050348Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":582418373,"revision":679,"compact-revision":-1}
	{"level":"info","ts":"2024-01-17T00:14:37.058624Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2024-01-17T00:14:37.060735Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"1.691711ms","hash":2569472480}
	{"level":"info","ts":"2024-01-17T00:14:37.060876Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2569472480,"revision":922,"compact-revision":679}
	{"level":"warn","ts":"2024-01-17T00:14:50.74948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.655316ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-17T00:14:50.749778Z","caller":"traceutil/trace.go:171","msg":"trace[1863617974] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1176; }","duration":"322.049546ms","start":"2024-01-17T00:14:50.427711Z","end":"2024-01-17T00:14:50.74976Z","steps":["trace[1863617974] 'range keys from in-memory index tree'  (duration: 321.611813ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-17T00:14:50.749832Z","caller":"traceutil/trace.go:171","msg":"trace[930695602] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"438.497916ms","start":"2024-01-17T00:14:50.311323Z","end":"2024-01-17T00:14:50.749821Z","steps":["trace[930695602] 'process raft request'  (duration: 437.948245ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-17T00:14:50.749584Z","caller":"traceutil/trace.go:171","msg":"trace[1381549144] linearizableReadLoop","detail":"{readStateIndex:1375; appliedIndex:1374; }","duration":"410.825268ms","start":"2024-01-17T00:14:50.338709Z","end":"2024-01-17T00:14:50.749534Z","steps":["trace[1381549144] 'read index received'  (duration: 410.521111ms)","trace[1381549144] 'applied index is now lower than readState.Index'  (duration: 303.227µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-17T00:14:50.749797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"411.086197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-01-17T00:14:50.751045Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-17T00:14:50.311299Z","time spent":"438.623812ms","remote":"127.0.0.1:51272","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.226\" mod_revision:1169 > success:<request_put:<key:\"/registry/masterleases/192.168.39.226\" value_size:67 lease:659650990547871444 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.226\" > >"}
	{"level":"info","ts":"2024-01-17T00:14:50.751095Z","caller":"traceutil/trace.go:171","msg":"trace[600119965] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1177; }","duration":"412.393894ms","start":"2024-01-17T00:14:50.338678Z","end":"2024-01-17T00:14:50.751071Z","steps":["trace[600119965] 'agreement among raft nodes before linearized reading'  (duration: 410.963777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-17T00:14:50.751298Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-17T00:14:50.33866Z","time spent":"412.623117ms","remote":"127.0.0.1:51306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-01-17T00:14:50.751453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.438661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-17T00:14:50.751506Z","caller":"traceutil/trace.go:171","msg":"trace[1747098811] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1177; }","duration":"174.480652ms","start":"2024-01-17T00:14:50.577007Z","end":"2024-01-17T00:14:50.751487Z","steps":["trace[1747098811] 'agreement among raft nodes before linearized reading'  (duration: 174.420786ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-17T00:14:50.751535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.997757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-17T00:14:50.751586Z","caller":"traceutil/trace.go:171","msg":"trace[1100811694] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1177; }","duration":"131.049946ms","start":"2024-01-17T00:14:50.620529Z","end":"2024-01-17T00:14:50.751579Z","steps":["trace[1100811694] 'agreement among raft nodes before linearized reading'  (duration: 130.965886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-17T00:14:51.024883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.186814ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9883023027402647261 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-837871\" mod_revision:1170 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-837871\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-837871\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-17T00:14:51.025291Z","caller":"traceutil/trace.go:171","msg":"trace[1569665479] transaction","detail":"{read_only:false; response_revision:1178; number_of_response:1; }","duration":"247.40607ms","start":"2024-01-17T00:14:50.777824Z","end":"2024-01-17T00:14:51.02523Z","steps":["trace[1569665479] 'process raft request'  (duration: 80.421491ms)","trace[1569665479] 'compare'  (duration: 166.050841ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:15:36 up 21 min,  0 users,  load average: 0.11, 0.14, 0.20
	Linux embed-certs-837871 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] <==
	I0117 00:12:38.356605       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:12:39.460774       1 handler_proxy.go:93] no RequestInfo found in the context
	W0117 00:12:39.460897       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:12:39.460951       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:12:39.460984       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0117 00:12:39.461203       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:12:39.462171       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:13:38.355784       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0117 00:14:38.356064       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:14:38.462895       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:14:38.463016       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:14:38.463912       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:14:39.463472       1 handler_proxy.go:93] no RequestInfo found in the context
	W0117 00:14:39.463510       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:14:39.463646       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:14:39.463655       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0117 00:14:39.463735       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:14:39.465022       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:14:50.751986       1 trace.go:236] Trace[1186782485]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.226,type:*v1.Endpoints,resource:apiServerIPInfo (17-Jan-2024 00:14:50.179) (total time: 572ms):
	Trace[1186782485]: ---"Transaction prepared" 129ms (00:14:50.310)
	Trace[1186782485]: ---"Txn call completed" 441ms (00:14:50.751)
	Trace[1186782485]: [572.233175ms] [572.233175ms] END
	
	
	==> kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] <==
	I0117 00:09:54.295445       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:23.793046       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:24.304227       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:10:42.182910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="267.297µs"
	E0117 00:10:53.799772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:54.319617       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:10:55.173174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="130.442µs"
	E0117 00:11:23.806083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:24.327094       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:11:53.813050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:54.337298       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:23.819014       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:24.346563       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:53.825806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:54.362887       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:13:23.834869       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:13:24.370812       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:13:53.841843       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:13:54.380468       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:14:23.848167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:14:24.390650       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:14:53.855428       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:14:54.408253       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:15:23.862170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:15:24.417092       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] <==
	I0116 23:59:57.991179       1 server_others.go:69] "Using iptables proxy"
	I0116 23:59:58.114054       1 node.go:141] Successfully retrieved node IP: 192.168.39.226
	I0116 23:59:58.356357       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 23:59:58.375293       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 23:59:58.410754       1 server_others.go:152] "Using iptables Proxier"
	I0116 23:59:58.412039       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 23:59:58.412497       1 server.go:846] "Version info" version="v1.28.4"
	I0116 23:59:58.412540       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 23:59:58.415063       1 config.go:315] "Starting node config controller"
	I0116 23:59:58.415252       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 23:59:58.416222       1 config.go:97] "Starting endpoint slice config controller"
	I0116 23:59:58.416338       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 23:59:58.416536       1 config.go:188] "Starting service config controller"
	I0116 23:59:58.416572       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 23:59:58.516051       1 shared_informer.go:318] Caches are synced for node config
	I0116 23:59:58.517252       1 shared_informer.go:318] Caches are synced for service config
	I0116 23:59:58.517271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] <==
	W0116 23:59:38.513758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:59:38.513771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 23:59:38.517285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:59:38.517326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 23:59:38.517398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 23:59:38.517411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 23:59:39.327700       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 23:59:39.327815       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:59:39.494077       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 23:59:39.494204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 23:59:39.516347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:59:39.516371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 23:59:39.574954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:59:39.575065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 23:59:39.584388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:59:39.584488       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 23:59:39.680231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 23:59:39.680346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 23:59:39.704526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 23:59:39.704647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 23:59:39.730575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:59:39.730710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 23:59:39.744943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 23:59:39.745062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0116 23:59:41.282961       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:54:33 UTC, ends at Wed 2024-01-17 00:15:36 UTC. --
	Jan 17 00:12:42 embed-certs-837871 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:12:53 embed-certs-837871 kubelet[3846]: E0117 00:12:53.148949    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:13:05 embed-certs-837871 kubelet[3846]: E0117 00:13:05.148922    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:13:20 embed-certs-837871 kubelet[3846]: E0117 00:13:20.149051    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:13:34 embed-certs-837871 kubelet[3846]: E0117 00:13:34.149555    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:13:42 embed-certs-837871 kubelet[3846]: E0117 00:13:42.233396    3846 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:13:42 embed-certs-837871 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:13:42 embed-certs-837871 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:13:42 embed-certs-837871 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:13:48 embed-certs-837871 kubelet[3846]: E0117 00:13:48.149240    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:14:02 embed-certs-837871 kubelet[3846]: E0117 00:14:02.148949    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:14:17 embed-certs-837871 kubelet[3846]: E0117 00:14:17.148700    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:14:31 embed-certs-837871 kubelet[3846]: E0117 00:14:31.148961    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:14:42 embed-certs-837871 kubelet[3846]: E0117 00:14:42.237425    3846 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:14:42 embed-certs-837871 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:14:42 embed-certs-837871 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:14:42 embed-certs-837871 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:14:42 embed-certs-837871 kubelet[3846]: E0117 00:14:42.327988    3846 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 17 00:14:45 embed-certs-837871 kubelet[3846]: E0117 00:14:45.149644    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:15:00 embed-certs-837871 kubelet[3846]: E0117 00:15:00.149838    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:15:15 embed-certs-837871 kubelet[3846]: E0117 00:15:15.150267    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	Jan 17 00:15:30 embed-certs-837871 kubelet[3846]: E0117 00:15:30.160166    3846 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 17 00:15:30 embed-certs-837871 kubelet[3846]: E0117 00:15:30.160240    3846 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 17 00:15:30 embed-certs-837871 kubelet[3846]: E0117 00:15:30.160477    3846 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-phbtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-6rsbl_kube-system(c3af6965-7851-4a08-8c60-78fefb523e9d): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:15:30 embed-certs-837871 kubelet[3846]: E0117 00:15:30.160527    3846 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-6rsbl" podUID="c3af6965-7851-4a08-8c60-78fefb523e9d"
	
	
	==> storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] <==
	I0116 23:59:58.185029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:59:58.198963       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:59:58.200559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:59:58.221321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:59:58.226878       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-837871_4d94f53a-2d4d-4403-a544-da32a34a5386!
	I0116 23:59:58.246319       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfde273a-d420-49e4-987f-a4fcc5a0f676", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-837871_4d94f53a-2d4d-4403-a544-da32a34a5386 became leader
	I0116 23:59:58.328059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-837871_4d94f53a-2d4d-4403-a544-da32a34a5386!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-837871 -n embed-certs-837871
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-837871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6rsbl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-837871 describe pod metrics-server-57f55c9bc5-6rsbl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-837871 describe pod metrics-server-57f55c9bc5-6rsbl: exit status 1 (62.864161ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6rsbl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-837871 describe pod metrics-server-57f55c9bc5-6rsbl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (143.01s)
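
The kubelet entries above show metrics-server stuck in ImagePullBackOff because the addon was enabled with the MetricsServer registry pointed at fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" rows in the audit log), so the pod can never pull its image or become ready and the addon check times out. A minimal sketch for inspecting that state by hand, assuming the embed-certs-837871 profile and its kubeconfig context were still available at the time (illustrative commands, not part of the recorded test run):

	kubectl --context embed-certs-837871 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	kubectl --context embed-certs-837871 -n kube-system get pods -o wide | grep metrics-server
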

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (125.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0117 00:13:38.241121   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0117 00:13:50.186317   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-17 00:15:40.77178941 +0000 UTC m=+5963.987293225
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-967325 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.04µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-967325 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
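start_stop_delete_test.go:297: note that the describe call above was given only 1.04µs of remaining context deadline (the 9m0s wait for the dashboard pods had already consumed it), so the empty deployment info reflects the timeout rather than a wrong image. One way to repeat the image check by hand, assuming the default-k8s-diff-port-967325 profile is still running (illustrative only, not part of the recorded run):

	kubectl --context default-k8s-diff-port-967325 -n kubernetes-dashboard get deploy \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'
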
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-967325 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-967325 logs -n 25: (1.448321676s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-771669 image                           | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	| start   | -p newest-cni-353558 --memory=2200 --alsologtostderr   | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	| addons  | enable metrics-server -p newest-cni-353558             | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-353558                                   | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-353558                  | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-353558 --memory=2200 --alsologtostderr   | newest-cni-353558            | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 17 Jan 24 00:15 UTC | 17 Jan 24 00:15 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/17 00:15:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0117 00:15:21.357392   66213 out.go:296] Setting OutFile to fd 1 ...
	I0117 00:15:21.357533   66213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0117 00:15:21.357543   66213 out.go:309] Setting ErrFile to fd 2...
	I0117 00:15:21.357548   66213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0117 00:15:21.357745   66213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0117 00:15:21.358311   66213 out.go:303] Setting JSON to false
	I0117 00:15:21.359217   66213 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7068,"bootTime":1705443454,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0117 00:15:21.359274   66213 start.go:138] virtualization: kvm guest
	I0117 00:15:21.361652   66213 out.go:177] * [newest-cni-353558] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0117 00:15:21.362995   66213 out.go:177]   - MINIKUBE_LOCATION=17975
	I0117 00:15:21.364344   66213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0117 00:15:21.363076   66213 notify.go:220] Checking for updates...
	I0117 00:15:21.366867   66213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:15:21.368290   66213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0117 00:15:21.369607   66213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0117 00:15:21.370851   66213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0117 00:15:21.372465   66213 config.go:182] Loaded profile config "newest-cni-353558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0117 00:15:21.373095   66213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:15:21.373205   66213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:15:21.387847   66213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I0117 00:15:21.388221   66213 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:15:21.388708   66213 main.go:141] libmachine: Using API Version  1
	I0117 00:15:21.388733   66213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:15:21.389079   66213 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:15:21.389251   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:21.389474   66213 driver.go:392] Setting default libvirt URI to qemu:///system
	I0117 00:15:21.389758   66213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:15:21.389794   66213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:15:21.403700   66213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0117 00:15:21.404138   66213 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:15:21.404592   66213 main.go:141] libmachine: Using API Version  1
	I0117 00:15:21.404618   66213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:15:21.404911   66213 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:15:21.405113   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:21.440996   66213 out.go:177] * Using the kvm2 driver based on existing profile
	I0117 00:15:21.442550   66213 start.go:298] selected driver: kvm2
	I0117 00:15:21.442564   66213 start.go:902] validating driver "kvm2" against &{Name:newest-cni-353558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node
_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0117 00:15:21.442704   66213 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0117 00:15:21.443363   66213 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0117 00:15:21.443437   66213 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0117 00:15:21.457838   66213 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0117 00:15:21.458207   66213 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0117 00:15:21.458270   66213 cni.go:84] Creating CNI manager for ""
	I0117 00:15:21.458284   66213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:15:21.458293   66213 start_flags.go:321] config:
	{Name:newest-cni-353558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-353558 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0117 00:15:21.458459   66213 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0117 00:15:21.460163   66213 out.go:177] * Starting control plane node newest-cni-353558 in cluster newest-cni-353558
	I0117 00:15:21.461304   66213 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0117 00:15:21.461340   66213 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0117 00:15:21.461350   66213 cache.go:56] Caching tarball of preloaded images
	I0117 00:15:21.461424   66213 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0117 00:15:21.461434   66213 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0117 00:15:21.461539   66213 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/config.json ...
	I0117 00:15:21.461730   66213 start.go:365] acquiring machines lock for newest-cni-353558: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0117 00:15:21.461773   66213 start.go:369] acquired machines lock for "newest-cni-353558" in 25.811µs
	I0117 00:15:21.461786   66213 start.go:96] Skipping create...Using existing machine configuration
	I0117 00:15:21.461792   66213 fix.go:54] fixHost starting: 
	I0117 00:15:21.462048   66213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:15:21.462076   66213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:15:21.475876   66213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0117 00:15:21.476349   66213 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:15:21.476782   66213 main.go:141] libmachine: Using API Version  1
	I0117 00:15:21.476805   66213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:15:21.477134   66213 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:15:21.477325   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:21.477483   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetState
	I0117 00:15:21.479041   66213 fix.go:102] recreateIfNeeded on newest-cni-353558: state=Stopped err=<nil>
	I0117 00:15:21.479081   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	W0117 00:15:21.479228   66213 fix.go:128] unexpected machine state, will restart: <nil>
	I0117 00:15:21.481226   66213 out.go:177] * Restarting existing kvm2 VM for "newest-cni-353558" ...
	I0117 00:15:21.482392   66213 main.go:141] libmachine: (newest-cni-353558) Calling .Start
	I0117 00:15:21.482546   66213 main.go:141] libmachine: (newest-cni-353558) Ensuring networks are active...
	I0117 00:15:21.483177   66213 main.go:141] libmachine: (newest-cni-353558) Ensuring network default is active
	I0117 00:15:21.483402   66213 main.go:141] libmachine: (newest-cni-353558) Ensuring network mk-newest-cni-353558 is active
	I0117 00:15:21.483918   66213 main.go:141] libmachine: (newest-cni-353558) Getting domain xml...
	I0117 00:15:21.484666   66213 main.go:141] libmachine: (newest-cni-353558) Creating domain...
	I0117 00:15:22.697137   66213 main.go:141] libmachine: (newest-cni-353558) Waiting to get IP...
	I0117 00:15:22.697981   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:22.698421   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:22.698502   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:22.698407   66248 retry.go:31] will retry after 210.660719ms: waiting for machine to come up
	I0117 00:15:22.910957   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:22.911455   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:22.911479   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:22.911396   66248 retry.go:31] will retry after 296.163069ms: waiting for machine to come up
	I0117 00:15:23.208713   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:23.209119   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:23.209141   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:23.209078   66248 retry.go:31] will retry after 325.581343ms: waiting for machine to come up
	I0117 00:15:23.536369   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:23.536892   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:23.536922   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:23.536842   66248 retry.go:31] will retry after 368.531657ms: waiting for machine to come up
	I0117 00:15:23.907428   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:23.907910   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:23.907940   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:23.907860   66248 retry.go:31] will retry after 533.164037ms: waiting for machine to come up
	I0117 00:15:24.442588   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:24.443084   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:24.443111   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:24.443002   66248 retry.go:31] will retry after 631.104771ms: waiting for machine to come up
	I0117 00:15:25.075312   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:25.075794   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:25.075817   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:25.075752   66248 retry.go:31] will retry after 1.042234653s: waiting for machine to come up
	I0117 00:15:26.119731   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:26.120298   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:26.120318   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:26.120244   66248 retry.go:31] will retry after 895.099913ms: waiting for machine to come up
	I0117 00:15:27.016803   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:27.017245   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:27.017273   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:27.017206   66248 retry.go:31] will retry after 1.148589522s: waiting for machine to come up
	I0117 00:15:28.167493   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:28.168019   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:28.168050   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:28.167954   66248 retry.go:31] will retry after 1.409133527s: waiting for machine to come up
	I0117 00:15:29.579361   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:29.579876   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:29.579910   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:29.579800   66248 retry.go:31] will retry after 2.227440478s: waiting for machine to come up
	I0117 00:15:31.808791   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:31.809412   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:31.809441   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:31.809345   66248 retry.go:31] will retry after 2.41037027s: waiting for machine to come up
	I0117 00:15:34.221118   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:34.221585   66213 main.go:141] libmachine: (newest-cni-353558) DBG | unable to find current IP address of domain newest-cni-353558 in network mk-newest-cni-353558
	I0117 00:15:34.221619   66213 main.go:141] libmachine: (newest-cni-353558) DBG | I0117 00:15:34.221536   66248 retry.go:31] will retry after 4.214865027s: waiting for machine to come up
	I0117 00:15:38.437581   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.438054   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has current primary IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.438075   66213 main.go:141] libmachine: (newest-cni-353558) Found IP for machine: 192.168.72.238
	I0117 00:15:38.438089   66213 main.go:141] libmachine: (newest-cni-353558) Reserving static IP address...
	I0117 00:15:38.438680   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "newest-cni-353558", mac: "52:54:00:54:c2:59", ip: "192.168.72.238"} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:38.438722   66213 main.go:141] libmachine: (newest-cni-353558) Reserved static IP address: 192.168.72.238
	I0117 00:15:38.438745   66213 main.go:141] libmachine: (newest-cni-353558) DBG | skip adding static IP to network mk-newest-cni-353558 - found existing host DHCP lease matching {name: "newest-cni-353558", mac: "52:54:00:54:c2:59", ip: "192.168.72.238"}
	I0117 00:15:38.438777   66213 main.go:141] libmachine: (newest-cni-353558) Waiting for SSH to be available...
	I0117 00:15:38.438794   66213 main.go:141] libmachine: (newest-cni-353558) DBG | Getting to WaitForSSH function...
	I0117 00:15:38.440826   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.441326   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:38.441356   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.441463   66213 main.go:141] libmachine: (newest-cni-353558) DBG | Using SSH client type: external
	I0117 00:15:38.441489   66213 main.go:141] libmachine: (newest-cni-353558) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa (-rw-------)
	I0117 00:15:38.441517   66213 main.go:141] libmachine: (newest-cni-353558) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0117 00:15:38.441526   66213 main.go:141] libmachine: (newest-cni-353558) DBG | About to run SSH command:
	I0117 00:15:38.441540   66213 main.go:141] libmachine: (newest-cni-353558) DBG | exit 0
	I0117 00:15:38.529997   66213 main.go:141] libmachine: (newest-cni-353558) DBG | SSH cmd err, output: <nil>: 
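Editorial note, not part of the captured log: the WaitForSSH step above probes guest reachability with an external ssh client and the argument list printed at the "Using SSH client type: external" lines. Purely as a hedged illustration (this is not minikube's implementation), a minimal Go sketch that reproduces that invocation with os/exec, reusing the key path and address shown in the log, looks like this:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Argument list mirrors the one logged above; adjust the key path and address for your setup.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa",
		"-p", "22",
		"docker@192.168.72.238",
		"exit 0", // the same reachability probe the provisioner runs
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}

A zero exit status here corresponds to the "SSH cmd err, output: <nil>:" line recorded above.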
	I0117 00:15:38.530409   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetConfigRaw
	I0117 00:15:38.531078   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:15:38.533459   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.533812   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:38.533848   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.534113   66213 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/newest-cni-353558/config.json ...
	I0117 00:15:38.534359   66213 machine.go:88] provisioning docker machine ...
	I0117 00:15:38.534384   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:38.534608   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetMachineName
	I0117 00:15:38.534789   66213 buildroot.go:166] provisioning hostname "newest-cni-353558"
	I0117 00:15:38.534821   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetMachineName
	I0117 00:15:38.534971   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:38.537227   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.537514   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:38.537530   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.537722   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:38.537943   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:38.538098   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:38.538234   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:38.538414   66213 main.go:141] libmachine: Using SSH client type: native
	I0117 00:15:38.538799   66213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:15:38.538815   66213 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-353558 && echo "newest-cni-353558" | sudo tee /etc/hostname
	I0117 00:15:38.670413   66213 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-353558
	
	I0117 00:15:38.670448   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:38.673360   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.673842   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:38.673917   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.674091   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:38.674300   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:38.674523   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:38.674669   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:38.674879   66213 main.go:141] libmachine: Using SSH client type: native
	I0117 00:15:38.675389   66213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:15:38.675419   66213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-353558' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-353558/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-353558' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0117 00:15:38.801821   66213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0117 00:15:38.801861   66213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0117 00:15:38.801885   66213 buildroot.go:174] setting up certificates
	I0117 00:15:38.801898   66213 provision.go:83] configureAuth start
	I0117 00:15:38.801915   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetMachineName
	I0117 00:15:38.802199   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:15:38.804773   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.805144   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:38.805173   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.805323   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:38.807699   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.807999   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:38.808035   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:38.808161   66213 provision.go:138] copyHostCerts
	I0117 00:15:38.808214   66213 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0117 00:15:38.808227   66213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0117 00:15:38.808287   66213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0117 00:15:38.808406   66213 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0117 00:15:38.808415   66213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0117 00:15:38.808442   66213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0117 00:15:38.808513   66213 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0117 00:15:38.808520   66213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0117 00:15:38.808541   66213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0117 00:15:38.808594   66213 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.newest-cni-353558 san=[192.168.72.238 192.168.72.238 localhost 127.0.0.1 minikube newest-cni-353558]
	I0117 00:15:39.025134   66213 provision.go:172] copyRemoteCerts
	I0117 00:15:39.025197   66213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0117 00:15:39.025220   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:39.028330   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.028756   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:39.028787   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.029052   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:39.029250   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.029397   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:39.029572   66213 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:15:39.119298   66213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0117 00:15:39.141015   66213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0117 00:15:39.161973   66213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0117 00:15:39.183188   66213 provision.go:86] duration metric: configureAuth took 381.261842ms
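Editorial note, not part of the captured log: configureAuth above regenerates the host certs and a server certificate whose SANs are listed in the provision.go:112 line. As an assumed sketch only (minikube signs the server cert with the CA at certs/ca-key.pem; this example self-signs for brevity), a self-contained Go program that generates a certificate with the same SANs is:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed for brevity; the real flow signs with the minikube CA instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-353558"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:112 log line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-353558"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.238"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

In the real flow it is the CA-signed equivalent of this PEM that the scp lines above push to /etc/docker/server.pem.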
	I0117 00:15:39.183224   66213 buildroot.go:189] setting minikube options for container-runtime
	I0117 00:15:39.183422   66213 config.go:182] Loaded profile config "newest-cni-353558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0117 00:15:39.183495   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:39.186184   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.186581   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:39.186610   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.186812   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:39.187060   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.187263   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.187427   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:39.187637   66213 main.go:141] libmachine: Using SSH client type: native
	I0117 00:15:39.187968   66213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:15:39.187990   66213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0117 00:15:39.498416   66213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0117 00:15:39.498448   66213 machine.go:91] provisioned docker machine in 964.072183ms
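Editorial note, not part of the captured log: the container-runtime step just above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and restarts CRI-O. A rough local equivalent, offered only as an assumed sketch to be run as root on the guest (this is not the provisioner's code), would be:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Same contents the printf | tee pipeline above writes out.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
		panic(err)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		panic(string(out))
	}
}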
	I0117 00:15:39.498462   66213 start.go:300] post-start starting for "newest-cni-353558" (driver="kvm2")
	I0117 00:15:39.498480   66213 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0117 00:15:39.498520   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:39.498861   66213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0117 00:15:39.498891   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:39.501723   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.502087   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:39.502191   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.502252   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:39.502489   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.502693   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:39.502861   66213 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:15:39.597235   66213 ssh_runner.go:195] Run: cat /etc/os-release
	I0117 00:15:39.601193   66213 info.go:137] Remote host: Buildroot 2021.02.12
	I0117 00:15:39.601220   66213 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0117 00:15:39.601285   66213 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0117 00:15:39.601387   66213 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0117 00:15:39.601500   66213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0117 00:15:39.611147   66213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0117 00:15:39.631594   66213 start.go:303] post-start completed in 133.118906ms
	I0117 00:15:39.631613   66213 fix.go:56] fixHost completed within 18.169820273s
	I0117 00:15:39.631632   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:39.634025   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.634410   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:39.634442   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.634669   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:39.634837   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.635003   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.635160   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:39.635306   66213 main.go:141] libmachine: Using SSH client type: native
	I0117 00:15:39.635618   66213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0117 00:15:39.635630   66213 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0117 00:15:39.754955   66213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705450539.708672758
	
	I0117 00:15:39.754978   66213 fix.go:206] guest clock: 1705450539.708672758
	I0117 00:15:39.754985   66213 fix.go:219] Guest: 2024-01-17 00:15:39.708672758 +0000 UTC Remote: 2024-01-17 00:15:39.631616357 +0000 UTC m=+18.325855225 (delta=77.056401ms)
	I0117 00:15:39.755008   66213 fix.go:190] guest clock delta is within tolerance: 77.056401ms
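Editorial note, not part of the captured log: fix.go compares the guest clock (read via `date +%s.%N`) against the host-side timestamp and accepts the 77.056401ms delta as within tolerance. A small hypothetical Go sketch of that comparison, using the exact values from the log (the 2-second threshold is an assumption, not minikube's configured tolerance):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host clock delta and whether it is acceptable.
func withinTolerance(guest, remote time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1705450539, 708672758) // parsed from `date +%s.%N` on the guest
	remote, _ := time.Parse(time.RFC3339Nano, "2024-01-17T00:15:39.631616357Z")
	delta, ok := withinTolerance(guest, remote, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta=77.056401ms within tolerance: true
}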
	I0117 00:15:39.755013   66213 start.go:83] releasing machines lock for "newest-cni-353558", held for 18.293231305s
	I0117 00:15:39.755030   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:39.755317   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:15:39.757785   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.758154   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:39.758188   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.758375   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:39.758912   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:39.759094   66213 main.go:141] libmachine: (newest-cni-353558) Calling .DriverName
	I0117 00:15:39.759216   66213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0117 00:15:39.759270   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:39.759310   66213 ssh_runner.go:195] Run: cat /version.json
	I0117 00:15:39.759338   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHHostname
	I0117 00:15:39.761942   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.762269   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.762302   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:39.762351   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.762434   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:39.762613   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.762713   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:39.762742   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:39.762785   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:39.762930   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHPort
	I0117 00:15:39.762955   66213 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:15:39.763075   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHKeyPath
	I0117 00:15:39.763235   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetSSHUsername
	I0117 00:15:39.763358   66213 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/newest-cni-353558/id_rsa Username:docker}
	I0117 00:15:39.846682   66213 ssh_runner.go:195] Run: systemctl --version
	I0117 00:15:39.878217   66213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0117 00:15:40.023044   66213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0117 00:15:40.029042   66213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0117 00:15:40.029123   66213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0117 00:15:40.042833   66213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0117 00:15:40.042864   66213 start.go:475] detecting cgroup driver to use...
	I0117 00:15:40.042961   66213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0117 00:15:40.055814   66213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0117 00:15:40.067200   66213 docker.go:217] disabling cri-docker service (if available) ...
	I0117 00:15:40.067252   66213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0117 00:15:40.081140   66213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0117 00:15:40.093345   66213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0117 00:15:40.198272   66213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0117 00:15:40.315072   66213 docker.go:233] disabling docker service ...
	I0117 00:15:40.315148   66213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0117 00:15:40.328541   66213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0117 00:15:40.341271   66213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0117 00:15:40.459790   66213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0117 00:15:40.573549   66213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0117 00:15:40.586921   66213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0117 00:15:40.604058   66213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0117 00:15:40.604115   66213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0117 00:15:40.612740   66213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0117 00:15:40.612798   66213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0117 00:15:40.621480   66213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0117 00:15:40.629854   66213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
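Editorial note, not part of the captured log: the three sed invocations above pin the pause image and switch CRI-O to the cgroupfs cgroup manager with a per-pod conmon cgroup. As an illustration of that ensure-key pattern only, here is a small Go sketch; the starting file contents are invented for the example, and the helper appends missing keys instead of inserting them after the cgroup_manager line as the logged sed does:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureKey mirrors the sed edits above: rewrite an existing `key = ...` line if present,
// otherwise append one. Operates on the file contents as a string for simplicity.
func ensureKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return strings.TrimRight(conf, "\n") + "\n" + line + "\n"
}

func main() {
	// Invented starting contents standing in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
`
	conf = ensureKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = ensureKey(conf, "cgroup_manager", "cgroupfs")
	conf = ensureKey(conf, "conmon_cgroup", "pod")
	fmt.Print(conf)
}

The printed fragment ends up with the same three values the logged sed edits ensure before crio is restarted further down.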
	I0117 00:15:40.638521   66213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0117 00:15:40.648029   66213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0117 00:15:40.655609   66213 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0117 00:15:40.655656   66213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0117 00:15:40.668070   66213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0117 00:15:40.677738   66213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0117 00:15:40.791787   66213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0117 00:15:40.959339   66213 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0117 00:15:40.959409   66213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0117 00:15:40.966926   66213 start.go:543] Will wait 60s for crictl version
	I0117 00:15:40.966975   66213 ssh_runner.go:195] Run: which crictl
	I0117 00:15:40.970419   66213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0117 00:15:41.014204   66213 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0117 00:15:41.014294   66213 ssh_runner.go:195] Run: crio --version
	I0117 00:15:41.061409   66213 ssh_runner.go:195] Run: crio --version
	I0117 00:15:41.111223   66213 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0117 00:15:41.112487   66213 main.go:141] libmachine: (newest-cni-353558) Calling .GetIP
	I0117 00:15:41.115189   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:41.115550   66213 main.go:141] libmachine: (newest-cni-353558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:c2:59", ip: ""} in network mk-newest-cni-353558: {Iface:virbr4 ExpiryTime:2024-01-17 01:14:32 +0000 UTC Type:0 Mac:52:54:00:54:c2:59 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:newest-cni-353558 Clientid:01:52:54:00:54:c2:59}
	I0117 00:15:41.115583   66213 main.go:141] libmachine: (newest-cni-353558) DBG | domain newest-cni-353558 has defined IP address 192.168.72.238 and MAC address 52:54:00:54:c2:59 in network mk-newest-cni-353558
	I0117 00:15:41.115773   66213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0117 00:15:41.120230   66213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0117 00:15:41.134279   66213 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:54:53 UTC, ends at Wed 2024-01-17 00:15:41 UTC. --
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.578581700Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ca1859fa-3d3d-42e3-8e25-bc7ad078338e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449619942893885,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-17T00:00:19.606719482Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:415d922d2285bad82e7490ec1f04bf61f340ed1351eea265c4597b8261afb415,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-dqkll,Uid:7120ca9d-d404-47b7-90d9-3e2609c8b60b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449619806149192,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-dqkll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7120ca9d-d404-47b7-90d9-3
e2609c8b60b,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-17T00:00:19.474018738Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-gtx6b,Uid:492a64a7-b9b2-4254-a59c-26feeabeb822,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449617085925675,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-17T00:00:16.752957643Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&PodSandboxMetadata{Name:kube-proxy-2z6bl,Uid:230eb872-e4ee-4b
c3-b7c4-bb3fa0ba9580,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449616674417451,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-17T00:00:16.341730796Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-967325,Uid:7e291f3c4fc82df664cf258be5a3c5de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449596196277927,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 7e291f3c4fc82df664cf258be5a3c5de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7e291f3c4fc82df664cf258be5a3c5de,kubernetes.io/config.seen: 2024-01-16T23:59:55.632339617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-967325,Uid:877e7c158e0ab06a12806ef1b68814df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449596190964397,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 877e7c158e0ab06a12806ef1b68814df,kubernetes.io/config.seen: 2024-01-16T23:59:55.632338257Z,kubernetes.io/config.source: file,},
RuntimeHandler:,},&PodSandbox{Id:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-967325,Uid:d68f36fbd779a70baeb9f49619aa10a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449596164776216,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70baeb9f49619aa10a4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.144:8444,kubernetes.io/config.hash: d68f36fbd779a70baeb9f49619aa10a4,kubernetes.io/config.seen: 2024-01-16T23:59:55.632336536Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-967325,Uid:063d5d56255116c3
52a6bbd5a5008fde,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705449596114073197,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c352a6bbd5a5008fde,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.144:2379,kubernetes.io/config.hash: 063d5d56255116c352a6bbd5a5008fde,kubernetes.io/config.seen: 2024-01-16T23:59:55.632331047Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=1253c238-bd56-424f-9502-eb4eb4fa1e87 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.579827613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9298ce5d-7636-42d3-b706-f83dc13c6cc2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.579924186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9298ce5d-7636-42d3-b706-f83dc13c6cc2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.580173160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9298ce5d-7636-42d3-b706-f83dc13c6cc2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.619198310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fdbb3242-826d-4bae-a224-61c328e4a7ee name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.619288109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fdbb3242-826d-4bae-a224-61c328e4a7ee name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.629283566Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4ab763b1-de18-4ac9-8cdf-9052f790eca7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.629933700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450541629910244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4ab763b1-de18-4ac9-8cdf-9052f790eca7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.631043632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c1c07d01-6c5c-42d6-a029-fc3fd6d02c7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.631111148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c1c07d01-6c5c-42d6-a029-fc3fd6d02c7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.631337003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c1c07d01-6c5c-42d6-a029-fc3fd6d02c7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.687328511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4082b97a-20df-4138-8a6e-ddc2d9a656f0 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.687403154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4082b97a-20df-4138-8a6e-ddc2d9a656f0 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.689030868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c2e904f6-e020-4f99-9b18-054405dc2cdf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.689675046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450541689599216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c2e904f6-e020-4f99-9b18-054405dc2cdf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.691178613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82445b3a-b568-4e06-9418-aa106b2633d8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.691242723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82445b3a-b568-4e06-9418-aa106b2633d8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.691458260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82445b3a-b568-4e06-9418-aa106b2633d8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.741285720Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=29e9b175-707b-4e83-b7e7-86923e5d2b80 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.741351154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=29e9b175-707b-4e83-b7e7-86923e5d2b80 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.743396859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b2950e2a-b16b-4ae9-83b7-e0ce2f9c4f26 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.744119415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450541744097045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b2950e2a-b16b-4ae9-83b7-e0ce2f9c4f26 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.744888347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c6ca33ef-a22f-4bd4-933b-5b91a190d191 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.744992248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c6ca33ef-a22f-4bd4-933b-5b91a190d191 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:15:41 default-k8s-diff-port-967325 crio[712]: time="2024-01-17 00:15:41.745239508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837,PodSandboxId:e5c0d80d63fd61566439a2c77a265752fbc449907a14fc6f33135582c522dab0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449620805384621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1859fa-3d3d-42e3-8e25-bc7ad078338e,},Annotations:map[string]string{io.kubernetes.container.hash: 948f152b,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542,PodSandboxId:7867a4fdb5254d4b74e2b9038ef5da8323d7aa9d55016ad49d934f455bcd2206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705449620262067235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2z6bl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580,},Annotations:map[string]string{io.kubernetes.container.hash: c6c29744,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868,PodSandboxId:e0f0ee36f1cffa94b7e5abbabb6cc599e7f458229bb9f79134339e71c7820393,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705449619776204107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gtx6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492a64a7-b9b2-4254-a59c-26feeabeb822,},Annotations:map[string]string{io.kubernetes.container.hash: e335c096,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea,PodSandboxId:586832a72457c9070d81970d24512a2faabbc3daa9c46898aea94410a8bfab4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705449597697037330,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d5d56255116c35
2a6bbd5a5008fde,},Annotations:map[string]string{io.kubernetes.container.hash: 56fb07fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373,PodSandboxId:d928f1b0b9c40e2c215c5d3e69ba242eb7537154a1a568ceb32df9eec871e6f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705449597591375394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e291f3c4fc82df66
4cf258be5a3c5de,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae,PodSandboxId:f708ac268f096df6dab7437e6c644fac7305dfd8f68da44f04c0f5ee41e877c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705449596722160212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68f36fbd779a70ba
eb9f49619aa10a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6429ade8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d,PodSandboxId:c7631b58cdd67b52bd61fee66f3a76cb3066850fb4250cf17b90b57aea3160b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705449596634935730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-967325,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 877e7c158e0ab06a12806ef1b68814df,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c6ca33ef-a22f-4bd4-933b-5b91a190d191 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	284632eb250da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   e5c0d80d63fd6       storage-provisioner
	a7769a6a67bd2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   7867a4fdb5254       kube-proxy-2z6bl
	d54e67f6cfd4e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   e0f0ee36f1cff       coredns-5dd5756b68-gtx6b
	1fc993cc983de       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   586832a72457c       etcd-default-k8s-diff-port-967325
	40ee2a17afa04       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   d928f1b0b9c40       kube-scheduler-default-k8s-diff-port-967325
	44c04220b559e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   f708ac268f096       kube-apiserver-default-k8s-diff-port-967325
	c733c24fe4cac       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   c7631b58cdd67       kube-controller-manager-default-k8s-diff-port-967325
	
	
	==> coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38319 - 12265 "HINFO IN 7363237114678645592.2750400025902809400. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009449709s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-967325
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-967325
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=default-k8s-diff-port-967325
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jan 2024 00:00:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-967325
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jan 2024 00:15:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:10:38 +0000   Tue, 16 Jan 2024 23:59:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:10:38 +0000   Tue, 16 Jan 2024 23:59:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:10:38 +0000   Tue, 16 Jan 2024 23:59:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:10:38 +0000   Wed, 17 Jan 2024 00:00:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.144
	  Hostname:    default-k8s-diff-port-967325
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1f1ef02d86f34c74b49036f31e17dfdd
	  System UUID:                1f1ef02d-86f3-4c74-b490-36f31e17dfdd
	  Boot ID:                    7c4fb655-2a4b-4cbb-ab84-165a343482be
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gtx6b                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-967325                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-967325             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-967325    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2z6bl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-967325             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-dqkll                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node default-k8s-diff-port-967325 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-967325 event: Registered Node default-k8s-diff-port-967325 in Controller
	
	
	==> dmesg <==
	[Jan16 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066709] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.427608] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.791055] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135828] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.447613] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 23:55] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.124605] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.183641] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.125289] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.259460] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.957176] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[ +19.327094] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 23:59] systemd-fstab-generator[3480]: Ignoring "noauto" for root device
	[Jan17 00:00] systemd-fstab-generator[3807]: Ignoring "noauto" for root device
	[ +13.096569] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] <==
	{"level":"info","ts":"2024-01-16T23:59:59.346349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.346355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 received MsgVoteResp from e83b713187665a36 at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.346365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e83b713187665a36 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.346372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e83b713187665a36 elected leader e83b713187665a36 at term 2"}
	{"level":"info","ts":"2024-01-16T23:59:59.347909Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.349092Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e83b713187665a36","local-member-attributes":"{Name:default-k8s-diff-port-967325 ClientURLs:[https://192.168.61.144:2379]}","request-path":"/0/members/e83b713187665a36/attributes","cluster-id":"2e42f40dd5a31940","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T23:59:59.349413Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:59:59.349983Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2e42f40dd5a31940","local-member-id":"e83b713187665a36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.350113Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.350161Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T23:59:59.350214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T23:59:59.350238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T23:59:59.350262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T23:59:59.35117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T23:59:59.358285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.144:2379"}
	{"level":"info","ts":"2024-01-17T00:09:59.387271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-01-17T00:09:59.390984Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":723,"took":"2.875702ms","hash":2448618449}
	{"level":"info","ts":"2024-01-17T00:09:59.391084Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2448618449,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-01-17T00:14:50.137157Z","caller":"traceutil/trace.go:171","msg":"trace[1635004686] linearizableReadLoop","detail":"{readStateIndex:1393; appliedIndex:1392; }","duration":"111.157049ms","start":"2024-01-17T00:14:50.025932Z","end":"2024-01-17T00:14:50.137089Z","steps":["trace[1635004686] 'read index received'  (duration: 110.913655ms)","trace[1635004686] 'applied index is now lower than readState.Index'  (duration: 242.332µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-17T00:14:50.137591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.557072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-17T00:14:50.137737Z","caller":"traceutil/trace.go:171","msg":"trace[1500568923] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1201; }","duration":"111.812309ms","start":"2024-01-17T00:14:50.025906Z","end":"2024-01-17T00:14:50.137718Z","steps":["trace[1500568923] 'agreement among raft nodes before linearized reading'  (duration: 111.512829ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-17T00:14:50.13797Z","caller":"traceutil/trace.go:171","msg":"trace[1685512710] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"246.91801ms","start":"2024-01-17T00:14:49.891033Z","end":"2024-01-17T00:14:50.137951Z","steps":["trace[1685512710] 'process raft request'  (duration: 245.872828ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-17T00:14:59.395591Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-01-17T00:14:59.397504Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":966,"took":"1.527786ms","hash":1396419432}
	{"level":"info","ts":"2024-01-17T00:14:59.397566Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1396419432,"revision":966,"compact-revision":723}
	
	
	==> kernel <==
	 00:15:42 up 20 min,  0 users,  load average: 0.01, 0.08, 0.14
	Linux default-k8s-diff-port-967325 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] <==
	E0117 00:11:01.854418       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:11:01.854426       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:12:00.743673       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0117 00:13:00.742997       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:13:01.853830       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:13:01.853990       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:13:01.854043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:13:01.855129       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:13:01.855244       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:13:01.855278       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:14:00.743532       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0117 00:15:00.743087       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:15:00.858470       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:15:00.858713       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:15:00.859162       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0117 00:15:01.859696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:15:01.859790       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:15:01.859799       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0117 00:15:01.859826       1 handler_proxy.go:93] no RequestInfo found in the context
	E0117 00:15:01.859894       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0117 00:15:01.860940       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] <==
	I0117 00:09:46.423105       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:15.926011       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:16.432793       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:10:45.932175       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:10:46.442610       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:11:12.308774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="374.046µs"
	E0117 00:11:15.941209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:16.451349       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0117 00:11:24.305534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="124.499µs"
	E0117 00:11:45.948310       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:11:46.461868       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:15.954358       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:16.470022       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:12:45.961065       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:12:46.478940       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:13:15.967840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:13:16.489959       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:13:45.973438       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:13:46.499063       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:14:15.979526       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:14:16.509288       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:14:45.990050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:14:46.519517       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0117 00:15:15.997897       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0117 00:15:16.532047       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] <==
	I0117 00:00:20.941465       1 server_others.go:69] "Using iptables proxy"
	I0117 00:00:20.972575       1 node.go:141] Successfully retrieved node IP: 192.168.61.144
	I0117 00:00:21.049024       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0117 00:00:21.049063       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0117 00:00:21.051568       1 server_others.go:152] "Using iptables Proxier"
	I0117 00:00:21.051753       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0117 00:00:21.051928       1 server.go:846] "Version info" version="v1.28.4"
	I0117 00:00:21.051960       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0117 00:00:21.054580       1 config.go:188] "Starting service config controller"
	I0117 00:00:21.054960       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0117 00:00:21.055074       1 config.go:97] "Starting endpoint slice config controller"
	I0117 00:00:21.055104       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0117 00:00:21.059744       1 config.go:315] "Starting node config controller"
	I0117 00:00:21.059856       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0117 00:00:21.155995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0117 00:00:21.156109       1 shared_informer.go:318] Caches are synced for service config
	I0117 00:00:21.160112       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] <==
	W0117 00:00:00.891301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0117 00:00:00.891353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0117 00:00:00.891468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0117 00:00:00.891502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0117 00:00:01.761053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0117 00:00:01.761103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0117 00:00:01.762943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0117 00:00:01.763079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0117 00:00:01.813484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0117 00:00:01.813577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0117 00:00:01.829708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0117 00:00:01.829940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0117 00:00:01.871355       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0117 00:00:01.871383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0117 00:00:01.925199       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0117 00:00:01.925326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0117 00:00:02.084340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0117 00:00:02.084458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0117 00:00:02.179410       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0117 00:00:02.179548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0117 00:00:02.193972       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0117 00:00:02.194021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0117 00:00:02.378459       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0117 00:00:02.378483       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0117 00:00:04.264916       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:54:53 UTC, ends at Wed 2024-01-17 00:15:42 UTC. --
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:04.301606    3814 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:13:04 default-k8s-diff-port-967325 kubelet[3814]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:13:05 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:05.286069    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:13:20 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:20.286938    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:13:32 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:32.287154    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:13:47 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:13:47.287157    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:14:01 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:14:01.285993    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:14:04 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:14:04.301892    3814 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:14:04 default-k8s-diff-port-967325 kubelet[3814]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:14:04 default-k8s-diff-port-967325 kubelet[3814]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:14:04 default-k8s-diff-port-967325 kubelet[3814]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:14:15 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:14:15.285534    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:14:29 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:14:29.286297    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:14:43 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:14:43.285965    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:14:58 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:14:58.285944    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:15:04 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:15:04.301504    3814 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 17 00:15:04 default-k8s-diff-port-967325 kubelet[3814]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 17 00:15:04 default-k8s-diff-port-967325 kubelet[3814]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 17 00:15:04 default-k8s-diff-port-967325 kubelet[3814]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 17 00:15:04 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:15:04.541715    3814 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 17 00:15:13 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:15:13.285767    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:15:24 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:15:24.286888    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	Jan 17 00:15:36 default-k8s-diff-port-967325 kubelet[3814]: E0117 00:15:36.286364    3814 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dqkll" podUID="7120ca9d-d404-47b7-90d9-3e2609c8b60b"
	
	
	==> storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] <==
	I0117 00:00:20.998440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0117 00:00:21.010905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0117 00:00:21.010994       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0117 00:00:21.022237       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0117 00:00:21.023981       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-967325_029a8914-b44c-4bb9-9ff7-18503f7dd5c3!
	I0117 00:00:21.029771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1de6f6eb-91f7-4996-afe5-42c5f34c038f", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-967325_029a8914-b44c-4bb9-9ff7-18503f7dd5c3 became leader
	I0117 00:00:21.124929       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-967325_029a8914-b44c-4bb9-9ff7-18503f7dd5c3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dqkll
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 describe pod metrics-server-57f55c9bc5-dqkll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-967325 describe pod metrics-server-57f55c9bc5-dqkll: exit status 1 (65.525332ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dqkll" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-967325 describe pod metrics-server-57f55c9bc5-dqkll: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (125.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-771669 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-771669 --alsologtostderr -v=1: signal: killed (7.12314ms)
start_stop_delete_test.go:311: out/minikube-linux-amd64 pause -p old-k8s-version-771669 --alsologtostderr -v=1 failed: signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25: (1.474971628s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-771669 image                           | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:50:38.759896   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.759907   60269 out.go:309] Setting ErrFile to fd 2...
	I0116 23:50:38.759914   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.760126   60269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:50:38.760678   60269 out.go:303] Setting JSON to false
	I0116 23:50:38.761641   60269 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5585,"bootTime":1705443454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:50:38.761709   60269 start.go:138] virtualization: kvm guest
	I0116 23:50:38.763997   60269 out.go:177] * [default-k8s-diff-port-967325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:50:38.765368   60269 notify.go:220] Checking for updates...
	I0116 23:50:38.767255   60269 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:50:38.768689   60269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:50:38.770002   60269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:50:38.771265   60269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:50:38.772478   60269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:50:38.773887   60269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:50:38.775771   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:50:38.776343   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.776406   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.790484   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0116 23:50:38.790881   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.791331   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.791354   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.791767   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.791948   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.792207   60269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:50:38.792478   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.792512   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.806373   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0116 23:50:38.806769   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.807352   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.807377   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.807713   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.807888   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.844486   60269 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:50:38.845772   60269 start.go:298] selected driver: kvm2
	I0116 23:50:38.845786   60269 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.845896   60269 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:50:38.846669   60269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.846746   60269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:50:38.861437   60269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:50:38.861794   60269 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:50:38.861869   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:50:38.861886   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:50:38.861903   60269 start_flags.go:321] config:
	{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.862070   60269 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.864512   60269 out.go:177] * Starting control plane node default-k8s-diff-port-967325 in cluster default-k8s-diff-port-967325
	I0116 23:50:35.694534   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.766489   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.865813   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:50:38.865854   60269 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:50:38.865868   60269 cache.go:56] Caching tarball of preloaded images
	I0116 23:50:38.865946   60269 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:50:38.865958   60269 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:50:38.866067   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:50:38.866254   60269 start.go:365] acquiring machines lock for default-k8s-diff-port-967325: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:50:44.846593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:47.918614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:53.998619   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:57.070626   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:03.150612   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:06.222615   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:12.302594   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:15.374637   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:21.454609   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:24.526620   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:30.606636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:33.678599   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:39.758623   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:42.830638   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:48.910588   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:51.982570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:58.062585   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:01.134627   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:07.214606   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:10.286692   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:16.366642   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:19.438617   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:25.518614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:28.590572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:34.670577   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:37.742593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:43.822547   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:46.894566   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:52.974586   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:56.046663   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:02.126625   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:05.198647   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:11.278567   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:14.350629   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:20.430640   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:23.502572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:29.582639   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:32.654601   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:38.734636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:41.806621   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:47.886613   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:50.958654   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:57.038576   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:00.110570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:03.114737   59938 start.go:369] acquired machines lock for "no-preload-085322" in 4m4.444202574s
	I0116 23:54:03.114809   59938 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:03.114817   59938 fix.go:54] fixHost starting: 
	I0116 23:54:03.115151   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:03.115188   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:03.129740   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0116 23:54:03.130141   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:03.130598   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:54:03.130619   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:03.130926   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:03.131095   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:03.131232   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:54:03.132851   59938 fix.go:102] recreateIfNeeded on no-preload-085322: state=Stopped err=<nil>
	I0116 23:54:03.132873   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	W0116 23:54:03.133043   59938 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:03.134884   59938 out.go:177] * Restarting existing kvm2 VM for "no-preload-085322" ...
	I0116 23:54:03.136262   59938 main.go:141] libmachine: (no-preload-085322) Calling .Start
	I0116 23:54:03.136432   59938 main.go:141] libmachine: (no-preload-085322) Ensuring networks are active...
	I0116 23:54:03.137113   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network default is active
	I0116 23:54:03.137528   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network mk-no-preload-085322 is active
	I0116 23:54:03.137880   59938 main.go:141] libmachine: (no-preload-085322) Getting domain xml...
	I0116 23:54:03.138613   59938 main.go:141] libmachine: (no-preload-085322) Creating domain...
	I0116 23:54:03.112375   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:03.112409   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:54:03.114601   59622 machine.go:91] provisioned docker machine in 4m37.41859178s
	I0116 23:54:03.114647   59622 fix.go:56] fixHost completed within 4m37.439054279s
	I0116 23:54:03.114654   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 4m37.439073197s
	W0116 23:54:03.114678   59622 start.go:694] error starting host: provision: host is not running
	W0116 23:54:03.114769   59622 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:54:03.114780   59622 start.go:709] Will try again in 5 seconds ...
	I0116 23:54:04.327758   59938 main.go:141] libmachine: (no-preload-085322) Waiting to get IP...
	I0116 23:54:04.328580   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.329077   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.329172   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.329065   60794 retry.go:31] will retry after 242.417074ms: waiting for machine to come up
	I0116 23:54:04.573623   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.574286   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.574314   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.574234   60794 retry.go:31] will retry after 376.338621ms: waiting for machine to come up
	I0116 23:54:04.952081   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.952569   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.952609   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.952512   60794 retry.go:31] will retry after 437.645823ms: waiting for machine to come up
	I0116 23:54:05.392169   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.392672   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.392701   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.392621   60794 retry.go:31] will retry after 422.797207ms: waiting for machine to come up
	I0116 23:54:05.817196   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.817610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.817639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.817571   60794 retry.go:31] will retry after 640.372887ms: waiting for machine to come up
	I0116 23:54:06.459387   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:06.459792   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:06.459822   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:06.459719   60794 retry.go:31] will retry after 683.537292ms: waiting for machine to come up
	I0116 23:54:07.144668   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:07.144994   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:07.145027   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:07.144980   60794 retry.go:31] will retry after 898.931175ms: waiting for machine to come up
	I0116 23:54:08.045022   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:08.045409   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:08.045437   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:08.045355   60794 retry.go:31] will retry after 1.288697598s: waiting for machine to come up
	I0116 23:54:08.117270   59622 start.go:365] acquiring machines lock for old-k8s-version-771669: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:54:09.335202   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:09.335610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:09.335639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:09.335546   60794 retry.go:31] will retry after 1.355850443s: waiting for machine to come up
	I0116 23:54:10.693078   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:10.693554   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:10.693606   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:10.693520   60794 retry.go:31] will retry after 1.916329826s: waiting for machine to come up
	I0116 23:54:12.611840   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:12.612332   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:12.612367   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:12.612282   60794 retry.go:31] will retry after 2.556862035s: waiting for machine to come up
	I0116 23:54:15.171589   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:15.172039   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:15.172068   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:15.171972   60794 retry.go:31] will retry after 2.519530929s: waiting for machine to come up
	I0116 23:54:17.694557   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:17.694939   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:17.694968   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:17.694886   60794 retry.go:31] will retry after 3.090458186s: waiting for machine to come up
	I0116 23:54:21.986927   60073 start.go:369] acquired machines lock for "embed-certs-837871" in 4m12.827160117s
	I0116 23:54:21.986990   60073 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:21.986998   60073 fix.go:54] fixHost starting: 
	I0116 23:54:21.987380   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:21.987421   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:22.004600   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0116 23:54:22.004995   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:22.005467   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:54:22.005496   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:22.005829   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:22.006029   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:22.006185   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:54:22.008077   60073 fix.go:102] recreateIfNeeded on embed-certs-837871: state=Stopped err=<nil>
	I0116 23:54:22.008103   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	W0116 23:54:22.008290   60073 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:22.010638   60073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-837871" ...
	I0116 23:54:20.788433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788853   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has current primary IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788879   59938 main.go:141] libmachine: (no-preload-085322) Found IP for machine: 192.168.50.183
	I0116 23:54:20.788893   59938 main.go:141] libmachine: (no-preload-085322) Reserving static IP address...
	I0116 23:54:20.789229   59938 main.go:141] libmachine: (no-preload-085322) Reserved static IP address: 192.168.50.183
	I0116 23:54:20.789275   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.789290   59938 main.go:141] libmachine: (no-preload-085322) Waiting for SSH to be available...
	I0116 23:54:20.789318   59938 main.go:141] libmachine: (no-preload-085322) DBG | skip adding static IP to network mk-no-preload-085322 - found existing host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"}
	I0116 23:54:20.789337   59938 main.go:141] libmachine: (no-preload-085322) DBG | Getting to WaitForSSH function...
	I0116 23:54:20.791667   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792013   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.792054   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792155   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH client type: external
	I0116 23:54:20.792182   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa (-rw-------)
	I0116 23:54:20.792239   59938 main.go:141] libmachine: (no-preload-085322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:20.792264   59938 main.go:141] libmachine: (no-preload-085322) DBG | About to run SSH command:
	I0116 23:54:20.792282   59938 main.go:141] libmachine: (no-preload-085322) DBG | exit 0
	I0116 23:54:20.878320   59938 main.go:141] libmachine: (no-preload-085322) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:20.878650   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetConfigRaw
	I0116 23:54:20.879331   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:20.881964   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882374   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.882410   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882680   59938 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:54:20.882904   59938 machine.go:88] provisioning docker machine ...
	I0116 23:54:20.882923   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:20.883142   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883335   59938 buildroot.go:166] provisioning hostname "no-preload-085322"
	I0116 23:54:20.883356   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883553   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:20.885549   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.885943   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.885978   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.886040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:20.886216   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886593   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:20.886774   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:20.887119   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:20.887134   59938 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname
	I0116 23:54:21.013385   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-085322
	
	I0116 23:54:21.013408   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.016312   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016630   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.016670   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016859   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.017058   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017252   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017386   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.017557   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.017929   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.017956   59938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-085322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-085322/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-085322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:21.135238   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:21.135270   59938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:21.135289   59938 buildroot.go:174] setting up certificates
	I0116 23:54:21.135313   59938 provision.go:83] configureAuth start
	I0116 23:54:21.135326   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:21.135618   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.138168   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138443   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.138470   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138654   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.140789   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141120   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.141147   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141324   59938 provision.go:138] copyHostCerts
	I0116 23:54:21.141367   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:21.141377   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:21.141447   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:21.141550   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:21.141561   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:21.141599   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:21.141671   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:21.141682   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:21.141714   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:21.141791   59938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.no-preload-085322 san=[192.168.50.183 192.168.50.183 localhost 127.0.0.1 minikube no-preload-085322]
	I0116 23:54:21.265735   59938 provision.go:172] copyRemoteCerts
	I0116 23:54:21.265800   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:21.265825   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.268291   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268647   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.268676   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268842   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.269076   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.269250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.269383   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.351116   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:21.373208   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 23:54:21.395440   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:54:21.418028   59938 provision.go:86] duration metric: configureAuth took 282.698913ms
	I0116 23:54:21.418069   59938 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:21.418298   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:54:21.418409   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.421433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421751   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.421792   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421959   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.422191   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422491   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.422646   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.422977   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.422995   59938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:21.743469   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:21.743502   59938 machine.go:91] provisioned docker machine in 860.58306ms
	I0116 23:54:21.743515   59938 start.go:300] post-start starting for "no-preload-085322" (driver="kvm2")
	I0116 23:54:21.743538   59938 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:21.743558   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.743870   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:21.743898   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.746430   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746786   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.746823   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746957   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.747146   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.747302   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.747394   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.837160   59938 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:21.841116   59938 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:21.841157   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:21.841249   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:21.841329   59938 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:21.841413   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:21.849407   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:21.872039   59938 start.go:303] post-start completed in 128.504699ms
	I0116 23:54:21.872072   59938 fix.go:56] fixHost completed within 18.75725342s
	I0116 23:54:21.872110   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.874707   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875214   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.875240   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875487   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.875722   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.875867   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.876032   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.876210   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.876556   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.876570   59938 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:21.986781   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449261.939803143
	
	I0116 23:54:21.986801   59938 fix.go:206] guest clock: 1705449261.939803143
	I0116 23:54:21.986809   59938 fix.go:219] Guest: 2024-01-16 23:54:21.939803143 +0000 UTC Remote: 2024-01-16 23:54:21.872075872 +0000 UTC m=+263.353199909 (delta=67.727271ms)
	I0116 23:54:21.986830   59938 fix.go:190] guest clock delta is within tolerance: 67.727271ms
	I0116 23:54:21.986836   59938 start.go:83] releasing machines lock for "no-preload-085322", held for 18.872049435s
	I0116 23:54:21.986866   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.987132   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.990038   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990450   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.990479   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990658   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991145   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991340   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991433   59938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:21.991476   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.991598   59938 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:21.991622   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.994160   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994384   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994588   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994611   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994696   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.994864   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.994879   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994956   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.995040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.995116   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995212   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.995279   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.995338   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995469   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:22.075709   59938 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:22.113571   59938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:22.255250   59938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:22.261120   59938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:22.261199   59938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:22.275644   59938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:22.275667   59938 start.go:475] detecting cgroup driver to use...
	I0116 23:54:22.275740   59938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:22.292314   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:22.303940   59938 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:22.303994   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:22.316146   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:22.328261   59938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:22.429568   59938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:22.545391   59938 docker.go:233] disabling docker service ...
	I0116 23:54:22.545478   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:22.558823   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:22.571068   59938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:22.680713   59938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:22.784418   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:22.800751   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:22.819671   59938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:22.819738   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.831950   59938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:22.832019   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.842937   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.853168   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.863057   59938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:22.873184   59938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:22.881975   59938 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:22.882051   59938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:22.895888   59938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:22.904754   59938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:23.007196   59938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:23.167523   59938 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:23.167604   59938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:23.172603   59938 start.go:543] Will wait 60s for crictl version
	I0116 23:54:23.172661   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.176234   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:23.211267   59938 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:23.211355   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.255175   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.300404   59938 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 23:54:23.302242   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:23.305445   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.305835   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:23.305860   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.306058   59938 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:23.310150   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:23.321291   59938 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 23:54:23.321348   59938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:23.358829   59938 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 23:54:23.358866   59938 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:54:23.358910   59938 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:23.358974   59938 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.359014   59938 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.359037   59938 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.359019   59938 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 23:54:23.359109   59938 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.359116   59938 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.359192   59938 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360471   59938 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.360486   59938 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.360479   59938 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 23:54:23.360482   59938 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.360503   59938 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:22.012196   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Start
	I0116 23:54:22.012405   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring networks are active...
	I0116 23:54:22.013178   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network default is active
	I0116 23:54:22.013529   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network mk-embed-certs-837871 is active
	I0116 23:54:22.013912   60073 main.go:141] libmachine: (embed-certs-837871) Getting domain xml...
	I0116 23:54:22.014514   60073 main.go:141] libmachine: (embed-certs-837871) Creating domain...
	I0116 23:54:23.261878   60073 main.go:141] libmachine: (embed-certs-837871) Waiting to get IP...
	I0116 23:54:23.263010   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.263550   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.263625   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.263530   60915 retry.go:31] will retry after 307.379701ms: waiting for machine to come up
	I0116 23:54:23.572127   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.572604   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.572640   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.572557   60915 retry.go:31] will retry after 367.767271ms: waiting for machine to come up
	I0116 23:54:23.942420   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.942907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.942937   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.942855   60915 retry.go:31] will retry after 327.227989ms: waiting for machine to come up
	I0116 23:54:23.582933   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.587427   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.591221   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 23:54:23.600943   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.601854   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.620857   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.636430   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.654149   59938 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 23:54:23.654203   59938 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.654256   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.704462   59938 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 23:54:23.704519   59938 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.704571   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851614   59938 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 23:54:23.851646   59938 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 23:54:23.851663   59938 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.851662   59938 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851711   59938 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 23:54:23.851754   59938 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.851767   59938 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 23:54:23.851795   59938 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.851802   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851832   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.851843   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851845   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.868480   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.906566   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.906609   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.906713   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.927452   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.927455   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.927669   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.927767   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.959664   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 23:54:23.959782   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:23.990016   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 23:54:23.990042   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990040   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:23.990089   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990217   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:24.018967   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019064   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 23:54:24.019080   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019089   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019115   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 23:54:24.019135   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019160   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:24.164580   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.888709   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898467269s)
	I0116 23:54:26.888747   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 23:54:26.888768   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888777   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.869591717s)
	I0116 23:54:26.888817   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888824   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 23:54:26.888710   59938 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.869617277s)
	I0116 23:54:26.888879   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 23:54:26.888856   59938 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724243534s)
	I0116 23:54:26.888931   59938 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 23:54:26.888965   59938 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.889006   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:24.271311   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.271747   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.271777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.271695   60915 retry.go:31] will retry after 459.459832ms: waiting for machine to come up
	I0116 23:54:24.732506   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.733007   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.733036   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.732957   60915 retry.go:31] will retry after 584.775753ms: waiting for machine to come up
	I0116 23:54:25.319663   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:25.320171   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:25.320215   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:25.320117   60915 retry.go:31] will retry after 942.568443ms: waiting for machine to come up
	I0116 23:54:26.264735   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:26.265207   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:26.265241   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:26.265152   60915 retry.go:31] will retry after 986.504626ms: waiting for machine to come up
	I0116 23:54:27.253751   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:27.254422   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:27.254451   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:27.254363   60915 retry.go:31] will retry after 1.332096797s: waiting for machine to come up
	I0116 23:54:28.588407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:28.589024   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:28.589057   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:28.588967   60915 retry.go:31] will retry after 1.510766858s: waiting for machine to come up
	I0116 23:54:29.054814   59938 ssh_runner.go:235] Completed: which crictl: (2.165780571s)
	I0116 23:54:29.054899   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:29.054938   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.166081855s)
	I0116 23:54:29.054973   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 23:54:29.055002   59938 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:29.055058   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:32.781289   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.726190592s)
	I0116 23:54:32.781378   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 23:54:32.781384   59938 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.72645917s)
	I0116 23:54:32.781421   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781452   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 23:54:32.781499   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781549   59938 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:32.786061   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 23:54:30.101582   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:30.102035   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:30.102080   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:30.101996   60915 retry.go:31] will retry after 1.681256612s: waiting for machine to come up
	I0116 23:54:31.786133   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:31.786678   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:31.786717   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:31.786625   60915 retry.go:31] will retry after 2.501397759s: waiting for machine to come up
	I0116 23:54:35.155364   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.37383462s)
	I0116 23:54:35.155398   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 23:54:35.155423   59938 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:35.155471   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:37.035841   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880336789s)
	I0116 23:54:37.035878   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 23:54:37.035908   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:37.035957   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:38.382731   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.346744157s)
	I0116 23:54:38.382770   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 23:54:38.382801   59938 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:38.382857   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:34.289289   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:34.289853   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:34.289876   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:34.289788   60915 retry.go:31] will retry after 2.655614857s: waiting for machine to come up
	I0116 23:54:36.947614   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:36.948090   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:36.948110   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:36.948022   60915 retry.go:31] will retry after 3.331974558s: waiting for machine to come up
	I0116 23:54:41.527170   60269 start.go:369] acquired machines lock for "default-k8s-diff-port-967325" in 4m2.660883224s
	I0116 23:54:41.527252   60269 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:41.527265   60269 fix.go:54] fixHost starting: 
	I0116 23:54:41.527698   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:41.527739   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:41.544050   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0116 23:54:41.544467   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:41.544979   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:54:41.545009   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:41.545297   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:41.545474   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:54:41.545619   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:54:41.547250   60269 fix.go:102] recreateIfNeeded on default-k8s-diff-port-967325: state=Stopped err=<nil>
	I0116 23:54:41.547276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	W0116 23:54:41.547440   60269 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:41.550415   60269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-967325" ...
	I0116 23:54:40.284163   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.284689   60073 main.go:141] libmachine: (embed-certs-837871) Found IP for machine: 192.168.39.226
	I0116 23:54:40.284718   60073 main.go:141] libmachine: (embed-certs-837871) Reserving static IP address...
	I0116 23:54:40.284734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has current primary IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.285176   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.285209   60073 main.go:141] libmachine: (embed-certs-837871) DBG | skip adding static IP to network mk-embed-certs-837871 - found existing host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"}
	I0116 23:54:40.285223   60073 main.go:141] libmachine: (embed-certs-837871) Reserved static IP address: 192.168.39.226
	I0116 23:54:40.285240   60073 main.go:141] libmachine: (embed-certs-837871) Waiting for SSH to be available...
	I0116 23:54:40.285254   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Getting to WaitForSSH function...
	I0116 23:54:40.287766   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288257   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.288283   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288417   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH client type: external
	I0116 23:54:40.288441   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa (-rw-------)
	I0116 23:54:40.288466   60073 main.go:141] libmachine: (embed-certs-837871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:40.288473   60073 main.go:141] libmachine: (embed-certs-837871) DBG | About to run SSH command:
	I0116 23:54:40.288481   60073 main.go:141] libmachine: (embed-certs-837871) DBG | exit 0
	I0116 23:54:40.374194   60073 main.go:141] libmachine: (embed-certs-837871) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:40.374646   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetConfigRaw
	I0116 23:54:40.375380   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.378323   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.378843   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.378877   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.379145   60073 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:54:40.379332   60073 machine.go:88] provisioning docker machine ...
	I0116 23:54:40.379351   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:40.379538   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379712   60073 buildroot.go:166] provisioning hostname "embed-certs-837871"
	I0116 23:54:40.379731   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379882   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.382022   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382386   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.382408   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382542   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.382695   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.382833   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.383019   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.383201   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.383686   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.383707   60073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-837871 && echo "embed-certs-837871" | sudo tee /etc/hostname
	I0116 23:54:40.506034   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-837871
	
	I0116 23:54:40.506064   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.508789   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509236   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.509266   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509427   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.509624   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509782   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509909   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.510109   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.510593   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.510620   60073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-837871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-837871/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-837871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:40.626272   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:40.626298   60073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:40.626356   60073 buildroot.go:174] setting up certificates
	I0116 23:54:40.626372   60073 provision.go:83] configureAuth start
	I0116 23:54:40.626383   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.626705   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.629226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629577   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.629605   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629737   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.631784   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632093   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.632114   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632249   60073 provision.go:138] copyHostCerts
	I0116 23:54:40.632306   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:40.632318   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:40.632389   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:40.632489   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:40.632499   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:40.632529   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:40.632607   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:40.632617   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:40.632645   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:40.632705   60073 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.embed-certs-837871 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube embed-certs-837871]
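
	The san=[...] list in the provision.go line above maps directly onto the SAN fields of Go's x509 certificate template. Below is a minimal sketch with the values copied from that line (the duplicate IP collapsed, the logged org value placed in Organization for illustration); key generation and CA signing are omitted, and this is not minikube's actual implementation.

    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "net"
    )

    func main() {
        // SANs and org taken from the "generating server cert" log line above.
        tmpl := x509.Certificate{
            Subject:  pkix.Name{Organization: []string{"jenkins.embed-certs-837871"}},
            DNSNames: []string{"localhost", "minikube", "embed-certs-837871"},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.39.226"),
                net.ParseIP("127.0.0.1"),
            },
        }
        fmt.Println(tmpl.DNSNames, tmpl.IPAddresses)
    }
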
	I0116 23:54:40.842680   60073 provision.go:172] copyRemoteCerts
	I0116 23:54:40.842749   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:40.842778   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.845198   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845585   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.845626   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845798   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.845987   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.846158   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.846313   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:40.931372   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:54:40.955528   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:40.979724   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 23:54:41.000711   60073 provision.go:86] duration metric: configureAuth took 374.325381ms
	I0116 23:54:41.000743   60073 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:41.000988   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:54:41.001078   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.003907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.004256   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004472   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.004703   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.004886   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.005025   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.005172   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.005489   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.005505   60073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:41.294820   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:41.294846   60073 machine.go:91] provisioned docker machine in 915.500911ms
	I0116 23:54:41.294860   60073 start.go:300] post-start starting for "embed-certs-837871" (driver="kvm2")
	I0116 23:54:41.294873   60073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:41.294894   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.295245   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:41.295275   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.298053   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298453   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.298482   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298630   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.298831   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.299028   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.299229   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.383434   60073 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:41.387526   60073 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:41.387550   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:41.387618   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:41.387716   60073 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:41.387832   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:41.395959   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:41.417602   60073 start.go:303] post-start completed in 122.726786ms
	I0116 23:54:41.417634   60073 fix.go:56] fixHost completed within 19.430636017s
	I0116 23:54:41.417657   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.420348   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420665   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.420692   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420853   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.421099   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421245   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421386   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.421532   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.421882   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.421898   60073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 23:54:41.527026   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449281.479666719
	
	I0116 23:54:41.527054   60073 fix.go:206] guest clock: 1705449281.479666719
	I0116 23:54:41.527061   60073 fix.go:219] Guest: 2024-01-16 23:54:41.479666719 +0000 UTC Remote: 2024-01-16 23:54:41.417638777 +0000 UTC m=+272.403645668 (delta=62.027942ms)
	I0116 23:54:41.527080   60073 fix.go:190] guest clock delta is within tolerance: 62.027942ms
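
	The two fix.go lines above compare the guest clock against the host clock and accept the ~62ms drift because it falls under a tolerance. A minimal sketch of that comparison; the 2-second tolerance here is an assumption for illustration, not minikube's configured value.

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance mirrors the "guest clock delta" check logged above:
    // accept the guest time if its skew from the host time is small enough.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(0, 1705449281479666719)     // guest clock from the log
        host := guest.Add(-62027942 * time.Nanosecond) // ~62.03ms delta, as logged
        fmt.Println(withinTolerance(guest, host, 2*time.Second)) // tolerance assumed
    }
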
	I0116 23:54:41.527085   60073 start.go:83] releasing machines lock for "embed-certs-837871", held for 19.540117712s
	I0116 23:54:41.527105   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.527420   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:41.530393   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.530857   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.530884   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.531031   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531460   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531637   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531720   60073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:41.531774   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.531821   60073 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:41.531854   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.534407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534578   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.534819   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534933   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535031   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.535068   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.535135   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535229   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535308   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535381   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535431   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.535512   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535633   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.653469   60073 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:41.658877   60073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:41.797035   60073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:41.804397   60073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:41.804475   60073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:41.819295   60073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:41.819319   60073 start.go:475] detecting cgroup driver to use...
	I0116 23:54:41.819382   60073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:41.833454   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:41.845089   60073 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:41.845145   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:41.857037   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:41.869156   60073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:41.968252   60073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:42.079885   60073 docker.go:233] disabling docker service ...
	I0116 23:54:42.079949   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:42.091847   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:42.102517   60073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:42.217275   60073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:42.314542   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:42.326438   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:42.342285   60073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:42.342356   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.354962   60073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:42.355039   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.367222   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.379029   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.387819   60073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
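
	The sed invocations above pin the CRI-O pause image and switch its cgroup manager by rewriting whole lines of 02-crio.conf. The same whole-line substitution, sketched in Go with regexp purely for illustration; the "before" values in the stand-in config are placeholders, not taken from the log.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Stand-in for /etc/crio/crio.conf.d/02-crio.conf before the rewrite
        // (placeholder values).
        conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
        // Same whole-line substitutions as the two sed commands in the log.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
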
	I0116 23:54:42.396923   60073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:42.404505   60073 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:42.404567   60073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:42.415632   60073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:42.423935   60073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:42.520457   60073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:42.676659   60073 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:42.676727   60073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:42.681457   60073 start.go:543] Will wait 60s for crictl version
	I0116 23:54:42.681535   60073 ssh_runner.go:195] Run: which crictl
	I0116 23:54:42.685259   60073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:42.728719   60073 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:42.728807   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.780603   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.830363   60073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:54:39.032115   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 23:54:39.032163   59938 cache_images.go:123] Successfully loaded all cached images
	I0116 23:54:39.032171   59938 cache_images.go:92] LoadImages completed in 15.67329231s
	I0116 23:54:39.032335   59938 ssh_runner.go:195] Run: crio config
	I0116 23:54:39.091256   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:39.091279   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:39.091299   59938 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:39.091318   59938 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-085322 NodeName:no-preload-085322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:39.091470   59938 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-085322"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:39.091558   59938 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-085322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
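
	The generated KubeletConfiguration above pins cgroupDriver to cgroupfs, the same driver CRI-O is configured with. A hypothetical sketch of reading that field back out of the YAML, assuming the gopkg.in/yaml.v3 module is available; this is not a minikube helper.

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/yaml.v3"
    )

    type kubeletConfig struct {
        Kind         string `yaml:"kind"`
        CgroupDriver string `yaml:"cgroupDriver"`
    }

    func main() {
        // A fragment of the KubeletConfiguration shown above.
        doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: cgroupfs\n")
        var cfg kubeletConfig
        if err := yaml.Unmarshal(doc, &cfg); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s uses cgroup driver %q\n", cfg.Kind, cfg.CgroupDriver)
    }
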
	I0116 23:54:39.091619   59938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 23:54:39.100748   59938 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:39.100805   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:39.108879   59938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 23:54:39.123478   59938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 23:54:39.138234   59938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 23:54:39.153408   59938 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:39.156806   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:39.168459   59938 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322 for IP: 192.168.50.183
	I0116 23:54:39.168490   59938 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:39.168630   59938 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:39.168669   59938 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:39.168728   59938 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/client.key
	I0116 23:54:39.168800   59938 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key.c63b40e0
	I0116 23:54:39.168839   59938 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key
	I0116 23:54:39.168946   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:39.168971   59938 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:39.168981   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:39.169006   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:39.169029   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:39.169052   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:39.169104   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:39.169755   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:39.191634   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:54:39.213185   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:39.234431   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:54:39.255434   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:39.277092   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:39.299752   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:39.321124   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:39.342706   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:39.363848   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:39.384588   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:39.405641   59938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:39.421517   59938 ssh_runner.go:195] Run: openssl version
	I0116 23:54:39.426839   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:39.435875   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440157   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440217   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.445267   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:39.454308   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:39.463232   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467601   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467660   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.473056   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:39.482143   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:39.491441   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495918   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495984   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.501453   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:39.510832   59938 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:39.515055   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:39.520820   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:39.526190   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:39.531649   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:39.536949   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:39.542406   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
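
	Each `openssl x509 ... -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same probe sketched with Go's crypto/x509; the path is the first one from the log, and the helper name is ours, not minikube's.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        ok, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
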
	I0116 23:54:39.547673   59938 kubeadm.go:404] StartCluster: {Name:no-preload-085322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:39.547793   59938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:39.547843   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:39.584159   59938 cri.go:89] found id: ""
	I0116 23:54:39.584236   59938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:39.592749   59938 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:39.592769   59938 kubeadm.go:636] restartCluster start
	I0116 23:54:39.592830   59938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:39.600998   59938 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:39.602031   59938 kubeconfig.go:92] found "no-preload-085322" server: "https://192.168.50.183:8443"
	I0116 23:54:39.604410   59938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:39.612167   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:39.612220   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:39.622740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.112200   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.112274   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.123342   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.612980   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.613059   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.624162   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.112722   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.112787   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.123740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.612248   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.626135   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.112616   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.112723   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.126872   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.612417   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.612503   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.623787   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.112309   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.112383   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.127168   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
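
	The repeated "Checking apiserver status" entries above probe for a kube-apiserver process over SSH roughly every 500ms until one appears. A minimal sketch of such a poll loop; runOverSSH is a hypothetical stand-in that always fails (as in the log), not a minikube function.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // runOverSSH is a hypothetical stand-in for minikube's SSH runner.
    func runOverSSH(cmd string) (string, error) {
        return "", errors.New("Process exited with status 1") // mirrors the failures above
    }

    // waitForAPIServer polls for a kube-apiserver pid until it shows up or the
    // deadline passes, roughly matching the ~500ms cadence in the log.
    func waitForAPIServer(timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := runOverSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(2*time.Second, 500*time.Millisecond))
    }
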
	I0116 23:54:41.551739   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Start
	I0116 23:54:41.551879   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring networks are active...
	I0116 23:54:41.552631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network default is active
	I0116 23:54:41.552977   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network mk-default-k8s-diff-port-967325 is active
	I0116 23:54:41.553395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Getting domain xml...
	I0116 23:54:41.554029   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Creating domain...
	I0116 23:54:42.830696   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting to get IP...
	I0116 23:54:42.831669   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832085   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832186   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:42.832069   61077 retry.go:31] will retry after 250.838508ms: waiting for machine to come up
	I0116 23:54:43.084848   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085478   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085513   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.085378   61077 retry.go:31] will retry after 344.020128ms: waiting for machine to come up
	I0116 23:54:43.430795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431300   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431329   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.431260   61077 retry.go:31] will retry after 397.588837ms: waiting for machine to come up
	I0116 23:54:42.831766   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:42.834360   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:42.834763   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834949   60073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:42.838761   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:42.853154   60073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:54:42.853222   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:42.890184   60073 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:54:42.890265   60073 ssh_runner.go:195] Run: which lz4
	I0116 23:54:42.894168   60073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 23:54:42.898036   60073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:54:42.898066   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:54:43.612492   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.612614   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.626278   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.112257   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.112377   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.126612   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.612241   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.626667   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.112214   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.112305   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.127417   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.612957   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.613061   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.626610   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.112219   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.112324   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.126151   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.612419   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.612513   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.623163   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.112516   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.112621   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.123247   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.612620   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.612713   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.623687   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.112357   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.112460   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.126673   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.830893   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831467   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.831405   61077 retry.go:31] will retry after 443.763933ms: waiting for machine to come up
	I0116 23:54:44.277218   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277738   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.277666   61077 retry.go:31] will retry after 534.948362ms: waiting for machine to come up
	I0116 23:54:44.814256   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814634   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.814585   61077 retry.go:31] will retry after 942.746702ms: waiting for machine to come up
	I0116 23:54:45.758822   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759311   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759340   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:45.759238   61077 retry.go:31] will retry after 1.189643515s: waiting for machine to come up
	I0116 23:54:46.951211   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951644   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:46.951576   61077 retry.go:31] will retry after 1.124824496s: waiting for machine to come up
	I0116 23:54:48.077539   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.077964   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.078001   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:48.077909   61077 retry.go:31] will retry after 1.239334518s: waiting for machine to come up
	I0116 23:54:44.553853   60073 crio.go:444] Took 1.659729 seconds to copy over tarball
	I0116 23:54:44.553941   60073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:54:47.428880   60073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87490029s)
	I0116 23:54:47.428913   60073 crio.go:451] Took 2.875036 seconds to extract the tarball
	I0116 23:54:47.428921   60073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:54:47.469606   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:47.521549   60073 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:54:47.521580   60073 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:54:47.521660   60073 ssh_runner.go:195] Run: crio config
	I0116 23:54:47.575254   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:54:47.575276   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:47.575292   60073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:47.575309   60073 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-837871 NodeName:embed-certs-837871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:47.575434   60073 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-837871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:47.575518   60073 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-837871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:47.575569   60073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:54:47.584525   60073 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:47.584604   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:47.592958   60073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 23:54:47.608090   60073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:54:47.623862   60073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 23:54:47.640242   60073 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:47.644031   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:47.658210   60073 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871 for IP: 192.168.39.226
	I0116 23:54:47.658247   60073 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:47.658451   60073 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:47.658543   60073 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:47.658766   60073 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/client.key
	I0116 23:54:47.658866   60073 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key.1754aec7
	I0116 23:54:47.658920   60073 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key
	I0116 23:54:47.659066   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:47.659104   60073 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:47.659123   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:47.659160   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:47.659190   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:47.659223   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:47.659275   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:47.659998   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:47.687031   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:54:47.713026   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:47.738546   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:54:47.764460   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:47.789464   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:47.814847   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:47.839476   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:47.864396   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:47.889208   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:47.914128   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:47.935079   60073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:47.950932   60073 ssh_runner.go:195] Run: openssl version
	I0116 23:54:47.957306   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:47.967238   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972287   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972338   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.977862   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:47.989326   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:47.999739   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004111   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004170   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.009425   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:48.019822   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:48.029871   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034154   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034221   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.039911   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:48.051585   60073 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:48.056576   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:48.062200   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:48.067931   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:48.073393   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:48.079291   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:48.084923   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:48.090458   60073 kubeadm.go:404] StartCluster: {Name:embed-certs-837871 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:48.090572   60073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:48.090637   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:48.132138   60073 cri.go:89] found id: ""
	I0116 23:54:48.132214   60073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:48.141955   60073 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:48.141976   60073 kubeadm.go:636] restartCluster start
	I0116 23:54:48.142032   60073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:48.151297   60073 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.152324   60073 kubeconfig.go:92] found "embed-certs-837871" server: "https://192.168.39.226:8443"
	I0116 23:54:48.154585   60073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:48.163509   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.163570   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.175536   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.664083   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.664180   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.676605   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.613067   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.992894   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.004266   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.112494   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.112595   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.123795   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.612548   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.612642   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.626676   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.626707   59938 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:49.626718   59938 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:49.626732   59938 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:49.626806   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:49.668119   59938 cri.go:89] found id: ""
	I0116 23:54:49.668192   59938 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:49.682918   59938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:49.691744   59938 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:49.691817   59938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700863   59938 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700895   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:49.815616   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.020421   59938 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.204764214s)
	I0116 23:54:51.020454   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.216832   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.332109   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.399376   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:51.399475   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:51.899827   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.400392   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.899528   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.399686   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:49.319244   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319686   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319717   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:49.319624   61077 retry.go:31] will retry after 1.922153535s: waiting for machine to come up
	I0116 23:54:51.243587   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244058   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244098   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:51.244008   61077 retry.go:31] will retry after 2.437065869s: waiting for machine to come up
	I0116 23:54:53.683433   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683851   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683882   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:53.683823   61077 retry.go:31] will retry after 3.130209662s: waiting for machine to come up
	I0116 23:54:49.163895   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.351314   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.362966   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.664243   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.664369   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.683487   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.163655   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.163757   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.180005   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.664531   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.664611   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.680106   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.163758   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.163894   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.179982   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.664626   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.664708   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.676699   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.163544   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.163670   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.180656   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.663792   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.663880   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.678849   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.164052   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.164169   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.178666   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.664220   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.664316   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.678867   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.899990   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.919132   59938 api_server.go:72] duration metric: took 2.51975517s to wait for apiserver process to appear ...
	I0116 23:54:53.919159   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:54:53.919179   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.905143   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.905180   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.905196   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.941657   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.941684   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.941697   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.986154   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.986183   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:57.419788   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.424352   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.424379   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:57.919987   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.926989   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.927013   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:58.420219   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:58.426904   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:54:58.435007   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:54:58.435038   59938 api_server.go:131] duration metric: took 4.515871856s to wait for apiserver health ...
	I0116 23:54:58.435051   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:58.435061   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:58.437150   59938 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:54:58.438936   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:54:58.455657   59938 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:54:58.508821   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:54:58.522305   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:54:58.522361   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:54:58.522372   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:54:58.522386   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:54:58.522403   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:54:58.522414   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:54:58.522428   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:54:58.522440   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:54:58.522449   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:54:58.522459   59938 system_pods.go:74] duration metric: took 13.604825ms to wait for pod list to return data ...
	I0116 23:54:58.522472   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:54:58.525739   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:54:58.525780   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:54:58.525802   59938 node_conditions.go:105] duration metric: took 3.32348ms to run NodePressure ...
	I0116 23:54:58.525836   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:56.815572   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816189   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816215   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:56.816141   61077 retry.go:31] will retry after 4.356544243s: waiting for machine to come up
	I0116 23:54:54.164263   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.164410   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.179137   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:54.663638   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.663755   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.678463   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.163957   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.164041   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.177018   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.663543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.663648   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.674693   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.164347   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.164456   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.175674   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.664319   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.664402   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.675373   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.164471   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.164576   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.176504   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.664144   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.664251   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.676983   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.164543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:58.164621   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:58.176779   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.176811   60073 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:58.176821   60073 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:58.176833   60073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:58.176899   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:58.214453   60073 cri.go:89] found id: ""
	I0116 23:54:58.214526   60073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:58.232076   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:58.240808   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:58.240879   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.249983   60073 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.250013   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.373313   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.857922   59938 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862719   59938 kubeadm.go:787] kubelet initialised
	I0116 23:54:58.862738   59938 kubeadm.go:788] duration metric: took 4.782925ms waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862746   59938 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:54:58.869022   59938 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.874505   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874535   59938 pod_ready.go:81] duration metric: took 5.485562ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.874546   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874554   59938 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.879329   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879355   59938 pod_ready.go:81] duration metric: took 4.787755ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.879363   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879368   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.883928   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883949   59938 pod_ready.go:81] duration metric: took 4.571713ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.883961   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883969   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.912868   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912894   59938 pod_ready.go:81] duration metric: took 28.911722ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.912907   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912915   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.313029   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313069   59938 pod_ready.go:81] duration metric: took 400.142619ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.313082   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313090   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.712991   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713014   59938 pod_ready.go:81] duration metric: took 399.912003ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.713023   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713028   59938 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:00.114190   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114215   59938 pod_ready.go:81] duration metric: took 401.177651ms waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:00.114225   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114231   59938 pod_ready.go:38] duration metric: took 1.251475914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
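The wait loop above is polling each control-plane pod for the PodReady condition and skipping pods whose node is not yet Ready. As a rough illustration of what that condition check amounts to, here is a minimal client-go sketch; it is not minikube's own code, the kubeconfig path is an assumption, and the pod/namespace names are simply taken from the log lines above.

// Minimal illustration (not minikube's implementation) of checking whether a
// pod has the Ready condition set to True, which is what the log is waiting for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(context.Background(), cs, "kube-system", "kube-apiserver-no-preload-085322")
	fmt.Println(ready, err)
}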
	I0116 23:55:00.114247   59938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:00.127362   59938 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:00.127388   59938 kubeadm.go:640] restartCluster took 20.534611532s
	I0116 23:55:00.127403   59938 kubeadm.go:406] StartCluster complete in 20.579733794s
	I0116 23:55:00.127422   59938 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.127503   59938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:00.129224   59938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.129463   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:00.130188   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:55:00.129546   59938 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:00.130489   59938 addons.go:69] Setting storage-provisioner=true in profile "no-preload-085322"
	I0116 23:55:00.130520   59938 addons.go:234] Setting addon storage-provisioner=true in "no-preload-085322"
	W0116 23:55:00.130550   59938 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:00.130626   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.131148   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.131179   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.131603   59938 addons.go:69] Setting default-storageclass=true in profile "no-preload-085322"
	I0116 23:55:00.131662   59938 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-085322"
	I0116 23:55:00.132229   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.132282   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.132642   59938 addons.go:69] Setting metrics-server=true in profile "no-preload-085322"
	I0116 23:55:00.132682   59938 addons.go:234] Setting addon metrics-server=true in "no-preload-085322"
	W0116 23:55:00.132691   59938 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:00.132738   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.133280   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.133322   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.137759   59938 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-085322" context rescaled to 1 replicas
	I0116 23:55:00.137827   59938 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:00.139774   59938 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:00.141410   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:00.150892   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0116 23:55:00.151398   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.151952   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.151970   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.152274   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0116 23:55:00.152458   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0116 23:55:00.152489   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.152695   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.152865   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153081   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153356   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153401   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153541   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153583   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153867   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.153942   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.154667   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.154714   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.155326   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.155362   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.156980   59938 addons.go:234] Setting addon default-storageclass=true in "no-preload-085322"
	W0116 23:55:00.157007   59938 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:00.157043   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.157421   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.157529   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.174130   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0116 23:55:00.174627   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.175185   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.175204   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.175566   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.175814   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.175862   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0116 23:55:00.176349   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.176936   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.176948   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.177295   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.177469   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.177631   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.179319   59938 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:00.180744   59938 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.180762   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:00.180777   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.179023   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.182381   59938 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:00.183551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:00.183564   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:00.183585   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.183692   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184112   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.184133   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.184767   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.184932   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.185450   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.186460   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.186779   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.186812   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.187038   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.187221   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.187328   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.187452   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.189369   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0116 23:55:00.189703   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.190080   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.190091   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.190478   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.190890   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.190930   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.205734   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0116 23:55:00.206238   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.206799   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.206818   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.207212   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.207446   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.208811   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.209063   59938 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.209077   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:00.209094   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.211899   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212297   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.212323   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212575   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.212826   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.213095   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.213275   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.307298   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.335551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:00.335575   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:00.372999   59938 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:00.373001   59938 node_ready.go:35] waiting up to 6m0s for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:00.378131   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:00.378152   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:00.380282   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.401018   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:00.401069   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:00.426132   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093491344s)
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020515974s)
	I0116 23:55:01.400920   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400937   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400965   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400993   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400886   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401092   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401295   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401313   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401324   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401334   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401360   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401402   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401416   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401417   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401426   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401436   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401448   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401458   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401468   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401476   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401725   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401757   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401781   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401789   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401797   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401950   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401973   59938 addons.go:470] Verifying addon metrics-server=true in "no-preload-085322"
	I0116 23:55:01.403136   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.403161   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.403172   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.410263   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.410287   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.410536   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.410575   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.410578   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.412923   59938 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
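The addon step above applies each generated manifest with the bundled kubectl binary and an explicit KUBECONFIG, exactly as the "Run: sudo KUBECONFIG=..." lines show. A minimal sketch of such an invocation from Go follows; the paths mirror the log, but the helper itself is illustrative rather than minikube's source.

// Sketch only: apply an addon manifest the way the logged command does.
// Error handling is deliberately simple.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println(err)
	}
}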
	I0116 23:55:02.567723   59622 start.go:369] acquired machines lock for "old-k8s-version-771669" in 54.450397128s
	I0116 23:55:02.567772   59622 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:55:02.567779   59622 fix.go:54] fixHost starting: 
	I0116 23:55:02.568183   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:02.568215   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:02.587692   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0116 23:55:02.588096   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:02.588571   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:02.588590   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:02.588934   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:02.589163   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:02.589273   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:02.590929   59622 fix.go:102] recreateIfNeeded on old-k8s-version-771669: state=Stopped err=<nil>
	I0116 23:55:02.591002   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	W0116 23:55:02.591207   59622 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:55:02.593233   59622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-771669" ...
	I0116 23:55:01.414436   59938 addons.go:505] enable addons completed in 1.284891826s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0116 23:55:02.377542   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:01.175656   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Found IP for machine: 192.168.61.144
	I0116 23:55:01.176276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has current primary IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176287   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserving static IP address...
	I0116 23:55:01.176764   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserved static IP address: 192.168.61.144
	I0116 23:55:01.176803   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.176821   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for SSH to be available...
	I0116 23:55:01.176849   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | skip adding static IP to network mk-default-k8s-diff-port-967325 - found existing host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"}
	I0116 23:55:01.176862   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Getting to WaitForSSH function...
	I0116 23:55:01.179585   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180052   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.180086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH client type: external
	I0116 23:55:01.180225   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa (-rw-------)
	I0116 23:55:01.180258   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:01.180280   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | About to run SSH command:
	I0116 23:55:01.180298   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | exit 0
	I0116 23:55:01.287063   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:01.287361   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetConfigRaw
	I0116 23:55:01.288015   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.291188   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291601   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.291651   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291892   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:55:01.292147   60269 machine.go:88] provisioning docker machine ...
	I0116 23:55:01.292171   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:01.292392   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292603   60269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-967325"
	I0116 23:55:01.292631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.295688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.296107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296214   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.296399   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296557   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296732   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.296957   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.297484   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.297508   60269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-967325 && echo "default-k8s-diff-port-967325" | sudo tee /etc/hostname
	I0116 23:55:01.444451   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-967325
	
	I0116 23:55:01.444484   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.447658   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448083   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.448130   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448237   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.448482   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448670   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448836   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.449035   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.449518   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.449549   60269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-967325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-967325/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-967325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:01.592961   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:01.592998   60269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:01.593037   60269 buildroot.go:174] setting up certificates
	I0116 23:55:01.593052   60269 provision.go:83] configureAuth start
	I0116 23:55:01.593066   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.593369   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.596637   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597053   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.597093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.599945   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600294   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.600332   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600435   60269 provision.go:138] copyHostCerts
	I0116 23:55:01.600492   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:01.600500   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:01.600560   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:01.600653   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:01.600657   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:01.600675   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:01.600733   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:01.600736   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:01.600751   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:01.600807   60269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-967325 san=[192.168.61.144 192.168.61.144 localhost 127.0.0.1 minikube default-k8s-diff-port-967325]
	I0116 23:55:01.777575   60269 provision.go:172] copyRemoteCerts
	I0116 23:55:01.777655   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:01.777685   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.780729   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.781117   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781323   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.781493   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.781672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.781817   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:01.875542   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:01.898144   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 23:55:01.923770   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:01.947374   60269 provision.go:86] duration metric: configureAuth took 354.306627ms
	I0116 23:55:01.947400   60269 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:01.947656   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:55:01.947752   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.950688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951006   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.951031   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951309   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.951475   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951846   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.952024   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.952549   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.952575   60269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:02.296465   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:02.296504   60269 machine.go:91] provisioned docker machine in 1.004340116s
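The provisioning command a few lines up (partially mangled by the logger's format verbs) writes /etc/sysconfig/crio.minikube with an --insecure-registry option and then restarts CRI-O. A hedged sketch of the equivalent steps, run locally for simplicity rather than over SSH as the provisioner does:

// Illustration, not minikube's source: write the sysconfig fragment shown in
// the log and restart the CRI-O service.
package main

import (
	"os"
	"os/exec"
)

func main() {
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		panic(err)
	}
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		panic(err)
	}
}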
	I0116 23:55:02.296517   60269 start.go:300] post-start starting for "default-k8s-diff-port-967325" (driver="kvm2")
	I0116 23:55:02.296533   60269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:02.296559   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.296898   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:02.296931   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.299843   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.300330   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300424   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.300613   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.300813   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.300988   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.392380   60269 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:02.396719   60269 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:02.396746   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:02.396840   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:02.396931   60269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:02.397013   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:02.405217   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:02.428260   60269 start.go:303] post-start completed in 131.726459ms
	I0116 23:55:02.428289   60269 fix.go:56] fixHost completed within 20.901025477s
	I0116 23:55:02.428351   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.431541   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.431904   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.431935   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.432124   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.432327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432679   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.432865   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:02.433181   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:02.433200   60269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:02.567559   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449302.518065106
	
	I0116 23:55:02.567583   60269 fix.go:206] guest clock: 1705449302.518065106
	I0116 23:55:02.567592   60269 fix.go:219] Guest: 2024-01-16 23:55:02.518065106 +0000 UTC Remote: 2024-01-16 23:55:02.428292966 +0000 UTC m=+263.717566224 (delta=89.77214ms)
	I0116 23:55:02.567628   60269 fix.go:190] guest clock delta is within tolerance: 89.77214ms
	I0116 23:55:02.567634   60269 start.go:83] releasing machines lock for "default-k8s-diff-port-967325", held for 21.040406039s
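The fixHost step just above reads the guest clock with "date +%s.%N" over SSH and accepts it if it is close enough to the host clock. A small sketch of that comparison; the 1-second tolerance is an assumed value for illustration, and the epoch string is the one from the log.

// Sketch of the guest-clock tolerance check: parse the guest's epoch output,
// compare with the local clock, and accept a small drift.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestEpoch string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta("1705449302.518065106") // value from the log
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(d.Seconds()) < 1.0)
}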
	I0116 23:55:02.567676   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.567951   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:02.571196   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.571612   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.571641   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.572815   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573415   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573626   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573709   60269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:02.573777   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.573935   60269 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:02.573963   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.577057   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577347   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577687   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577741   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577786   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577804   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577976   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578023   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578172   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578358   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578359   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578488   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.578514   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.707601   60269 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:02.715420   60269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:02.871362   60269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:02.878362   60269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:02.878438   60269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:02.898508   60269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:02.898534   60269 start.go:475] detecting cgroup driver to use...
	I0116 23:55:02.898627   60269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:02.915544   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:02.929881   60269 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:02.929948   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:02.946126   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:02.963314   60269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:03.087669   60269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:03.231908   60269 docker.go:233] disabling docker service ...
	I0116 23:55:03.232001   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:03.247745   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:03.263573   60269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:03.394931   60269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:03.533725   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:03.550475   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:03.571922   60269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:55:03.571984   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.584086   60269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:03.584195   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.595191   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.604671   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.614076   60269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:03.623637   60269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:03.632143   60269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:03.632225   60269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:03.645964   60269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:03.657719   60269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
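The sed commands above rewrite whole lines of /etc/crio/crio.conf.d/02-crio.conf so the pause image and cgroup manager match what the test run expects. The following Go sketch performs the same line-anchored replacement; it is illustrative only, not minikube's implementation.

// Rewrite pause_image and cgroup_manager lines in the CRI-O drop-in config,
// mirroring the logged sed edits.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	repl := map[string]string{
		`(?m)^.*pause_image = .*$`:    `pause_image = "registry.k8s.io/pause:3.9"`,
		`(?m)^.*cgroup_manager = .*$`: `cgroup_manager = "cgroupfs"`,
	}
	for pattern, line := range repl {
		data = regexp.MustCompile(pattern).ReplaceAll(data, []byte(line))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}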
	I0116 23:54:59.164409   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.363424   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.434315   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.505227   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:59.505321   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.006175   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.505693   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.005697   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.505467   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.005808   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.033017   60073 api_server.go:72] duration metric: took 2.527792184s to wait for apiserver process to appear ...
	I0116 23:55:02.033039   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:02.033056   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:03.785123   60269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:03.976744   60269 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:03.976819   60269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:03.981545   60269 start.go:543] Will wait 60s for crictl version
	I0116 23:55:03.981598   60269 ssh_runner.go:195] Run: which crictl
	I0116 23:55:03.985233   60269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:04.033443   60269 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:04.033541   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.087776   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.142302   60269 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:55:02.594568   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Start
	I0116 23:55:02.594750   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring networks are active...
	I0116 23:55:02.595457   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network default is active
	I0116 23:55:02.595812   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network mk-old-k8s-version-771669 is active
	I0116 23:55:02.596285   59622 main.go:141] libmachine: (old-k8s-version-771669) Getting domain xml...
	I0116 23:55:02.597150   59622 main.go:141] libmachine: (old-k8s-version-771669) Creating domain...
	I0116 23:55:03.999986   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting to get IP...
	I0116 23:55:04.001060   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.001581   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.001663   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.001550   61289 retry.go:31] will retry after 298.561748ms: waiting for machine to come up
	I0116 23:55:04.302120   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.302820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.302847   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.302767   61289 retry.go:31] will retry after 342.293835ms: waiting for machine to come up
	I0116 23:55:04.646424   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.647107   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.647133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.647055   61289 retry.go:31] will retry after 395.611503ms: waiting for machine to come up
	I0116 23:55:05.046785   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.047276   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.047304   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.047189   61289 retry.go:31] will retry after 552.22886ms: waiting for machine to come up
	I0116 23:55:07.029353   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.029384   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.029401   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.187789   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.187830   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.187877   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.197889   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.197924   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.533214   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.540976   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:07.541008   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.033550   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.044749   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:08.044779   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.533231   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.540197   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0116 23:55:08.551065   60073 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:08.551108   60073 api_server.go:131] duration metric: took 6.518060223s to wait for apiserver health ...
	I0116 23:55:08.551119   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:55:08.551128   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:08.553370   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
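Process 60073 above waits for the restarted apiserver by polling https://192.168.39.226:8443/healthz roughly every 500ms, tolerating the early 403 (anonymous user) and 500 (post-start hooks still settling) responses until a plain 200 "ok" comes back. The following is a minimal sketch of that polling loop; the file and function names are invented, and TLS verification is skipped only because the probe targets a cluster-internal, self-signed endpoint.

// healthz_wait.go - sketch (assumed names, not minikube's api_server.go) of
// polling the apiserver's /healthz endpoint until it returns 200 or the
// deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal certificate, so this probe
		// skips verification; a real client would pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.226:8443/healthz", time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver is healthy")
}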
	I0116 23:55:04.377661   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:06.377732   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:07.377978   59938 node_ready.go:49] node "no-preload-085322" has status "Ready":"True"
	I0116 23:55:07.378007   59938 node_ready.go:38] duration metric: took 7.004955625s waiting for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:07.378019   59938 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:07.394319   59938 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401604   59938 pod_ready.go:92] pod "coredns-76f75df574-ptq95" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.401634   59938 pod_ready.go:81] duration metric: took 7.260618ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401647   59938 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412094   59938 pod_ready.go:92] pod "etcd-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.412123   59938 pod_ready.go:81] duration metric: took 10.46753ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412137   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922096   59938 pod_ready.go:92] pod "kube-apiserver-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.922169   59938 pod_ready.go:81] duration metric: took 510.023791ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922208   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929615   59938 pod_ready.go:92] pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.929645   59938 pod_ready.go:81] duration metric: took 7.422332ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929659   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178529   59938 pod_ready.go:92] pod "kube-proxy-64z5c" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.178558   59938 pod_ready.go:81] duration metric: took 248.89013ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178572   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:04.144239   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:04.147395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.147816   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:04.147864   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.148032   60269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:04.152106   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:04.166312   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:55:04.166412   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:04.207955   60269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:55:04.208024   60269 ssh_runner.go:195] Run: which lz4
	I0116 23:55:04.211817   60269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:04.215791   60269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:04.215816   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:55:06.109275   60269 crio.go:444] Took 1.897478 seconds to copy over tarball
	I0116 23:55:06.109361   60269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:08.555066   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:08.584102   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:08.660533   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:08.680559   60073 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:08.680588   60073 system_pods.go:61] "coredns-5dd5756b68-49p2f" [5241a39a-599e-4ae2-b8c8-7494382819d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:08.680595   60073 system_pods.go:61] "etcd-embed-certs-837871" [99fce5e6-124e-4e96-b722-41c0be595863] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:08.680603   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [7bf73dd6-7f27-482a-896a-a5097bd047a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:08.680609   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [be8f34fb-2d00-4c86-aab3-c4d74d92d42c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:08.680615   60073 system_pods.go:61] "kube-proxy-nglts" [3ec00f1a-258b-4da3-9b41-dbd96156de04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:08.680624   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [f9af2c43-cb66-4ebb-b23c-4f898be33d64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:08.680669   60073 system_pods.go:61] "metrics-server-57f55c9bc5-npd7s" [5aa75079-2c85-4fde-ba88-9ae5bb73ecc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:08.680678   60073 system_pods.go:61] "storage-provisioner" [5bae4d8b-030b-4476-8aa6-f4a66a8f80a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:55:08.680685   60073 system_pods.go:74] duration metric: took 20.127241ms to wait for pod list to return data ...
	I0116 23:55:08.680695   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:08.685562   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:08.685594   60073 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:08.685604   60073 node_conditions.go:105] duration metric: took 4.905393ms to run NodePressure ...
	I0116 23:55:08.685622   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:05.600887   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.601408   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.601444   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.601312   61289 retry.go:31] will retry after 584.67072ms: waiting for machine to come up
	I0116 23:55:06.188018   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:06.188524   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:06.188550   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:06.188434   61289 retry.go:31] will retry after 859.064841ms: waiting for machine to come up
	I0116 23:55:07.048810   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:07.049461   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:07.049491   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:07.049417   61289 retry.go:31] will retry after 1.064800753s: waiting for machine to come up
	I0116 23:55:08.115741   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:08.116406   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:08.116430   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:08.116372   61289 retry.go:31] will retry after 1.289118736s: waiting for machine to come up
	I0116 23:55:09.407820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:09.408291   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:09.408319   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:09.408262   61289 retry.go:31] will retry after 1.623353195s: waiting for machine to come up
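The libmachine lines above show the usual pattern for waiting on a freshly started KVM domain: look up the DHCP lease for the domain's MAC address and, when no IP has been assigned yet, retry after a progressively longer interval. A small sketch of such a retry-with-backoff loop follows; the lookup function is a stand-in, not the real libvirt query.

// wait_for_ip.go - sketch (assumed names, not libmachine) of retrying a lookup
// that may fail, sleeping for a growing, jittered interval between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for "ask libvirt for the domain's DHCP lease"; here it
// simply fails a few times so the backoff behaviour is visible.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.50.10", nil
}

func main() {
	backoff := 300 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Printf("machine is up at %s after %d attempts\n", ip, attempt)
			return
		}
		// Grow the wait and add jitter, mirroring the increasing
		// "will retry after ..." intervals in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d failed (%v); will retry after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	fmt.Println("gave up waiting for the machine to come up")
}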
	I0116 23:55:08.979310   59938 pod_ready.go:92] pod "kube-scheduler-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.979407   59938 pod_ready.go:81] duration metric: took 800.824219ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.979438   59938 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.546193   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:09.452388   60269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342992298s)
	I0116 23:55:09.452415   60269 crio.go:451] Took 3.343109 seconds to extract the tarball
	I0116 23:55:09.452423   60269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:09.497202   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:09.552426   60269 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:55:09.552460   60269 cache_images.go:84] Images are preloaded, skipping loading
	I0116 23:55:09.552532   60269 ssh_runner.go:195] Run: crio config
	I0116 23:55:09.623685   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:09.623716   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:09.623743   60269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:09.623767   60269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-967325 NodeName:default-k8s-diff-port-967325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:55:09.623938   60269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-967325"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:09.624024   60269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-967325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 23:55:09.624079   60269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:55:09.632768   60269 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:09.632838   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:09.642978   60269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 23:55:09.660304   60269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:09.677864   60269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 23:55:09.699234   60269 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:09.703170   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:09.718511   60269 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325 for IP: 192.168.61.144
	I0116 23:55:09.718551   60269 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:09.718727   60269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:09.718798   60269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:09.718895   60269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/client.key
	I0116 23:55:09.718975   60269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key.a430fbc2
	I0116 23:55:09.719039   60269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key
	I0116 23:55:09.719175   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:09.719225   60269 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:09.719240   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:09.719283   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:09.719318   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:09.719358   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:09.719416   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:09.720339   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:09.748578   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:55:09.778396   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:09.803745   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:55:09.828009   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:09.850951   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:09.874273   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:09.897385   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:09.923319   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:09.946301   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:09.970778   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:09.994497   60269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:10.013259   60269 ssh_runner.go:195] Run: openssl version
	I0116 23:55:10.020357   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:10.032324   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037071   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037122   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.043220   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:10.052796   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:10.063065   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.067904   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.068000   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.074570   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:10.087080   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:10.099734   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105299   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105360   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.112084   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:10.123175   60269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:10.127669   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:10.133522   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:10.139085   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:10.145018   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:10.150920   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:10.156719   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
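The series of "openssl x509 ... -checkend 86400" commands above asks whether each certificate under /var/lib/minikube/certs will still be valid 24 hours from now; a failing check would trigger regeneration before the cluster restart continues. The same test can be expressed directly with Go's crypto/x509, as in this sketch (the certificate path is only an example).

// cert_checkend.go - sketch of what the -checkend 86400 calls verify: whether
// a PEM certificate expires within the next 24 hours. Path is hypothetical.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" falls past the certificate's NotAfter time.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	path := "/var/lib/minikube/certs/apiserver-etcd-client.crt" // example path from the log
	soon, err := expiresWithin(path, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Printf("%s expires within 24h and should be regenerated\n", path)
	} else {
		fmt.Printf("%s is valid for at least another 24h\n", path)
	}
}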
	I0116 23:55:10.162808   60269 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:10.162893   60269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:10.162936   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:10.208917   60269 cri.go:89] found id: ""
	I0116 23:55:10.209008   60269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:10.221689   60269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:10.221710   60269 kubeadm.go:636] restartCluster start
	I0116 23:55:10.221776   60269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:10.233762   60269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.234916   60269 kubeconfig.go:92] found "default-k8s-diff-port-967325" server: "https://192.168.61.144:8444"
	I0116 23:55:10.237484   60269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:10.246418   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.246495   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.257759   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.747378   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.747466   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.761884   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.247445   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.247543   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.258490   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.747483   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.747623   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.764389   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.246997   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.247122   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.262538   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.747219   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.747387   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.762535   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.246636   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.246705   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.258883   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.747504   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.747588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.759640   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
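Process 60269 above polls "sudo pgrep -xnf kube-apiserver.*minikube.*" twice a second to see whether an apiserver process has appeared; when the overall deadline passes without a PID (the "context deadline exceeded" reported further down), the restart path gives up and reconfigures the cluster instead. A compact sketch of that bounded polling loop follows; the helper names are assumptions, not minikube's own.

// apiserver_pid_wait.go - sketch of running pgrep on a ticker until it reports
// a PID or the surrounding context's deadline is reached.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			// This is the state the log reaches: no PID before the deadline,
			// so the caller decides the cluster needs to be reconfigured.
			return "", ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apiserver never appeared:", err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver pid:", pid)
}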
	I0116 23:55:09.229704   60073 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224745   60073 kubeadm.go:787] kubelet initialised
	I0116 23:55:10.224771   60073 kubeadm.go:788] duration metric: took 994.984702ms waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224781   60073 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:11.348058   60073 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.356516   60073 pod_ready.go:102] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:13.856540   60073 pod_ready.go:92] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:13.856573   60073 pod_ready.go:81] duration metric: took 2.508479475s waiting for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.856586   60073 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.033009   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:11.033544   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:11.033588   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:11.033487   61289 retry.go:31] will retry after 1.553841353s: waiting for machine to come up
	I0116 23:55:12.588794   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:12.589269   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:12.589297   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:12.589245   61289 retry.go:31] will retry after 1.907517113s: waiting for machine to come up
	I0116 23:55:14.499305   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:14.499734   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:14.499759   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:14.499683   61289 retry.go:31] will retry after 3.406811143s: waiting for machine to come up
	I0116 23:55:13.986208   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:15.987948   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:18.490012   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:14.247197   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.247299   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.262013   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:14.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.746558   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.761452   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.246988   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.247075   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.261345   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.747524   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.747618   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.760291   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.246551   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.246648   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.260545   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.746471   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.746585   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.758637   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.247227   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.247331   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.258514   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.747046   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.747138   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.758877   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.247489   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.247561   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.259581   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.747241   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.747335   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.759146   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.867702   60073 pod_ready.go:102] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:17.864681   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.864706   60073 pod_ready.go:81] duration metric: took 4.008111977s waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.864718   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873106   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.873127   60073 pod_ready.go:81] duration metric: took 8.400576ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873136   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878501   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.878519   60073 pod_ready.go:81] duration metric: took 5.375395ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878535   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883653   60073 pod_ready.go:92] pod "kube-proxy-nglts" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.883669   60073 pod_ready.go:81] duration metric: took 5.128525ms waiting for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883680   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.888978   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.888996   60073 pod_ready.go:81] duration metric: took 5.309484ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.889011   60073 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.908092   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:17.908486   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:17.908520   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:17.908432   61289 retry.go:31] will retry after 3.983135021s: waiting for machine to come up
	I0116 23:55:20.987833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:22.989682   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:19.246437   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.246547   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.257900   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:19.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.746572   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.758509   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.247334   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:20.247418   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:20.258909   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.258939   60269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:20.258948   60269 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:20.258958   60269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:20.259023   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:20.300659   60269 cri.go:89] found id: ""
	I0116 23:55:20.300740   60269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:20.315326   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:20.323563   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:20.323629   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331846   60269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331871   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:20.443085   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.556705   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113585461s)
	I0116 23:55:21.556730   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.745024   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.824910   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.916770   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:21.916856   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.416983   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.917411   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:23.417012   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
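
A minimal sketch, assuming a hypothetical runRemote SSH helper, of the reconfigure sequence the log walks through above: rather than a full `kubeadm init`, individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) are re-run against the regenerated /var/tmp/minikube/kubeadm.yaml, after which the process-appearance wait starts again.

// kubeadm_reconfigure.go - sketch of re-running kubeadm init phases in order.
package main

import "fmt"

func runRemote(cmd string) error { fmt.Println("run:", cmd); return nil } // placeholder

func reconfigureControlPlane() error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if err := runRemote(cmd); err != nil {
			return fmt.Errorf("kubeadm phase %q: %w", phase, err)
		}
	}
	return nil
}

func main() { _ = reconfigureControlPlane() }
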
	I0116 23:55:19.896636   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.898504   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.896143   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896665   59622 main.go:141] libmachine: (old-k8s-version-771669) Found IP for machine: 192.168.72.114
	I0116 23:55:21.896717   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has current primary IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896729   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserving static IP address...
	I0116 23:55:21.897128   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.897157   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | skip adding static IP to network mk-old-k8s-version-771669 - found existing host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"}
	I0116 23:55:21.897174   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Getting to WaitForSSH function...
	I0116 23:55:21.897194   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserved static IP address: 192.168.72.114
	I0116 23:55:21.897207   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting for SSH to be available...
	I0116 23:55:21.900064   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900492   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.900531   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900775   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH client type: external
	I0116 23:55:21.900805   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa (-rw-------)
	I0116 23:55:21.900835   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:21.900852   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | About to run SSH command:
	I0116 23:55:21.900867   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | exit 0
	I0116 23:55:22.002573   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:22.003051   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetConfigRaw
	I0116 23:55:22.003790   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.007208   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.007726   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007947   59622 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:55:22.008199   59622 machine.go:88] provisioning docker machine ...
	I0116 23:55:22.008225   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.008439   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008649   59622 buildroot.go:166] provisioning hostname "old-k8s-version-771669"
	I0116 23:55:22.008672   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008859   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.011893   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012288   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.012321   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012475   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.012655   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.012825   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.013009   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.013176   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.013645   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.013669   59622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-771669 && echo "old-k8s-version-771669" | sudo tee /etc/hostname
	I0116 23:55:22.159863   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-771669
	
	I0116 23:55:22.159897   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.162806   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163257   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.163296   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163483   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.163700   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.163882   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.164023   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.164179   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.164551   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.164569   59622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-771669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-771669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-771669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:22.309881   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:22.309914   59622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:22.309935   59622 buildroot.go:174] setting up certificates
	I0116 23:55:22.309945   59622 provision.go:83] configureAuth start
	I0116 23:55:22.309957   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.310198   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.312567   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.312901   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.312930   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.313107   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.315382   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.315767   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.315807   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.316000   59622 provision.go:138] copyHostCerts
	I0116 23:55:22.316043   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:22.316053   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:22.316116   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:22.316202   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:22.316210   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:22.316228   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:22.316289   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:22.316296   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:22.316312   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:22.316365   59622 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-771669 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube old-k8s-version-771669]
	I0116 23:55:22.437253   59622 provision.go:172] copyRemoteCerts
	I0116 23:55:22.437325   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:22.437348   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.440075   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440363   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.440390   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440626   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.440808   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.440960   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.441145   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:22.536222   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:22.562061   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 23:55:22.586856   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:22.610936   59622 provision.go:86] duration metric: configureAuth took 300.975023ms
	I0116 23:55:22.610965   59622 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:22.611217   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:55:22.611306   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.614770   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615218   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.615253   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615508   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.615738   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.615931   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.616078   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.616259   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.616622   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.616641   59622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:22.958075   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:22.958102   59622 machine.go:91] provisioned docker machine in 949.885683ms
	I0116 23:55:22.958121   59622 start.go:300] post-start starting for "old-k8s-version-771669" (driver="kvm2")
	I0116 23:55:22.958136   59622 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:22.958160   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.958492   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:22.958528   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.961489   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.961850   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.961879   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.962042   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.962232   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.962423   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.962585   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.058948   59622 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:23.063281   59622 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:23.063309   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:23.063383   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:23.063477   59622 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:23.063589   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:23.075280   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:23.099934   59622 start.go:303] post-start completed in 141.796411ms
	I0116 23:55:23.099963   59622 fix.go:56] fixHost completed within 20.532183026s
	I0116 23:55:23.099986   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.102938   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103320   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.103355   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103471   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.103682   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103837   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103981   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.104148   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:23.104525   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:23.104539   59622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:23.239875   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449323.216935077
	
	I0116 23:55:23.239947   59622 fix.go:206] guest clock: 1705449323.216935077
	I0116 23:55:23.239963   59622 fix.go:219] Guest: 2024-01-16 23:55:23.216935077 +0000 UTC Remote: 2024-01-16 23:55:23.099966517 +0000 UTC m=+357.574360679 (delta=116.96856ms)
	I0116 23:55:23.239987   59622 fix.go:190] guest clock delta is within tolerance: 116.96856ms
	I0116 23:55:23.239994   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 20.672247822s
	I0116 23:55:23.240021   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.240303   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:23.243487   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.243962   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.243999   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.244245   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244731   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244917   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.245023   59622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:23.245091   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.245237   59622 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:23.245261   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.248169   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248391   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248664   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.248691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248835   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.248936   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.249012   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.249043   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249196   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249284   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.249351   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.249454   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249607   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249737   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.380837   59622 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:23.387163   59622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:23.543350   59622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:23.550519   59622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:23.550587   59622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:23.565019   59622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:23.565046   59622 start.go:475] detecting cgroup driver to use...
	I0116 23:55:23.565125   59622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:23.579314   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:23.591247   59622 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:23.591310   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:23.605294   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:23.618799   59622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:23.742752   59622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:23.876604   59622 docker.go:233] disabling docker service ...
	I0116 23:55:23.876678   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:23.891240   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:23.906010   59622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:24.059751   59622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:24.186517   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:24.201344   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:24.218947   59622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 23:55:24.219014   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.230843   59622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:24.230917   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.243120   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.252562   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.264610   59622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:24.275702   59622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:24.284982   59622 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:24.285046   59622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:24.298681   59622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:24.307743   59622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:55:24.425125   59622 ssh_runner.go:195] Run: sudo systemctl restart crio
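
A minimal sketch of what the sed edits above amount to: rewriting /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the requested pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup, before crio is restarted. This is not minikube's code; it also folds the log's separate delete-and-append of conmon_cgroup into one replacement, and error handling is reduced to panics.

// crio_conf_rewrite.go - sketch of the 02-crio.conf adjustments shown in the log.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Point CRI-O at the pause image used by this Kubernetes version.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
	// Use cgroupfs and run conmon in the pod cgroup.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
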
	I0116 23:55:24.597300   59622 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:24.597373   59622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:24.603241   59622 start.go:543] Will wait 60s for crictl version
	I0116 23:55:24.603314   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:24.607580   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:24.648923   59622 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:24.649022   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.696485   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.754660   59622 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 23:55:24.756045   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:24.759033   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759392   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:24.759432   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759771   59622 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:24.764448   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:24.777724   59622 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 23:55:24.777812   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:24.825020   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:24.825088   59622 ssh_runner.go:195] Run: which lz4
	I0116 23:55:24.829208   59622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 23:55:24.833495   59622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:24.833523   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 23:55:24.992848   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:27.488098   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:23.916961   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.417588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.441144   60269 api_server.go:72] duration metric: took 2.5243712s to wait for apiserver process to appear ...
	I0116 23:55:24.441176   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:24.441198   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:24.441742   60269 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0116 23:55:24.941292   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.835831   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.835867   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.835882   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.868017   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.868058   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.942282   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.960876   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:27.960928   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:28.442258   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.449969   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.450001   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:24.397456   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:26.397862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.404313   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.941892   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.959617   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.959651   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:29.441742   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:29.446933   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0116 23:55:29.455520   60269 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:29.455548   60269 api_server.go:131] duration metric: took 5.014364838s to wait for apiserver health ...
	I0116 23:55:29.455561   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:29.455569   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:29.457775   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
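
A minimal sketch of the healthz wait shown above: keep polling https://192.168.61.144:8444/healthz, treating 403 and 500 as "not ready yet" and stopping only on 200. This is not minikube's implementation; TLS verification is skipped here purely to keep the sketch short, whereas the real client authenticates to the apiserver. In the run above the 403s come back while RBAC is still bootstrapping ("system:anonymous" forbidden), the 500s list which post-start hooks are still pending, and the final 200 ends the wait after about 5s.

// healthz_wait.go - sketch of polling the apiserver healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // 200: control plane is serving
			}
			// 403 / 500 while the apiserver finishes starting: keep waiting
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.144:8444/healthz", 4*time.Minute))
}
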
	I0116 23:55:26.372140   59622 crio.go:444] Took 1.542968 seconds to copy over tarball
	I0116 23:55:26.372233   59622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:29.316720   59622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944443375s)
	I0116 23:55:29.316749   59622 crio.go:451] Took 2.944578 seconds to extract the tarball
	I0116 23:55:29.316760   59622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:29.359053   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:29.407438   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:29.407466   59622 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:55:29.407526   59622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.407582   59622 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.407605   59622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.407624   59622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.407656   59622 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 23:55:29.407657   59622 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.407840   59622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.407530   59622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.409393   59622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 23:55:29.409457   59622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.409480   59622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.409647   59622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.409675   59622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.409682   59622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.622629   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.626907   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.630596   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 23:55:29.633693   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.635868   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.644919   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.649358   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.724339   59622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 23:55:29.724400   59622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.724467   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.795647   59622 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 23:55:29.795694   59622 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.795747   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.844312   59622 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 23:55:29.844373   59622 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 23:55:29.844427   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849856   59622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 23:55:29.849876   59622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.849911   59622 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 23:55:29.849928   59622 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.849956   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850005   59622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 23:55:29.850030   59622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.850047   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.850062   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850101   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.852839   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 23:55:29.872722   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.872753   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.872821   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.872997   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.963139   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 23:55:29.967047   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 23:55:29.981726   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 23:55:30.047814   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 23:55:30.047906   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 23:55:30.047972   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 23:55:30.048002   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 23:55:30.281680   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:30.423881   59622 cache_images.go:92] LoadImages completed in 1.016396141s
	W0116 23:55:30.423996   59622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
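	(The cache_images lines above probe each expected image with "podman image inspect --format {{.Id}}"; when an image is missing it is marked "needs transfer", any stale tag is removed with crictl, and the image would then be loaded from the on-disk cache — which fails here because the cache files do not exist. Below is a small Go sketch of that probe-and-remove pattern; it runs the commands locally with os/exec as an assumption, whereas minikube runs them on the guest over SSH.)

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // imagePresent reports whether the container runtime already has the image,
	    // mirroring the "podman image inspect --format {{.Id}}" probe in the log.
	    func imagePresent(image string) bool {
	    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	    	return err == nil && strings.TrimSpace(string(out)) != ""
	    }

	    func main() {
	    	images := []string{
	    		"registry.k8s.io/kube-apiserver:v1.16.0",
	    		"registry.k8s.io/coredns:1.6.2",
	    	}
	    	for _, img := range images {
	    		if imagePresent(img) {
	    			fmt.Printf("%s already present\n", img)
	    			continue
	    		}
	    		fmt.Printf("%s needs transfer: removing stale tag, then loading from cache\n", img)
	    		// Best-effort removal of a stale tag, as in the log ("crictl rmi ...").
	    		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
	    		// Loading from the cache directory would follow here; omitted because in
	    		// this run the cache files were missing ("no such file or directory").
	    	}
	    }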
	I0116 23:55:30.424113   59622 ssh_runner.go:195] Run: crio config
	I0116 23:55:30.486915   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:30.486935   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:30.486951   59622 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:30.486975   59622 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-771669 NodeName:old-k8s-version-771669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 23:55:30.487151   59622 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-771669"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-771669
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.114:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:30.487252   59622 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-771669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:55:30.487320   59622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 23:55:30.497629   59622 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:30.497706   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:30.505710   59622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 23:55:30.523292   59622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:30.539544   59622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 23:55:30.557436   59622 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:30.561329   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:29.488446   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:32.775251   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:29.459468   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:29.471218   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:29.488687   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:29.499433   60269 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:29.499458   60269 system_pods.go:61] "coredns-5dd5756b68-7kwrd" [38a96fe5-70a8-46e6-b899-b39558e08855] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:29.499465   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [bc2e7805-71f2-4924-80d7-2dd853ebeea9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:29.499472   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [8c01f8da-0156-4d16-b5e7-262427171137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:29.499484   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [04b93c96-ebc0-4257-b480-7be1ea9f7fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:29.499496   60269 system_pods.go:61] "kube-proxy-jmq58" [ec5c282f-04c8-4839-a16f-0a2024e0d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:29.499521   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [11e73d49-a3ba-44b3-9630-fd07fb23777f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:29.499533   60269 system_pods.go:61] "metrics-server-57f55c9bc5-bkbpm" [6ddb8af1-da20-4400-b6ba-6f0cf342b115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:29.499538   60269 system_pods.go:61] "storage-provisioner" [5b22598c-c5e0-4a9e-96f3-1732ecd018a1] Running
	I0116 23:55:29.499544   60269 system_pods.go:74] duration metric: took 10.840963ms to wait for pod list to return data ...
	I0116 23:55:29.499550   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:29.502918   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:29.502954   60269 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:29.502965   60269 node_conditions.go:105] duration metric: took 3.409475ms to run NodePressure ...
	I0116 23:55:29.502985   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:29.743687   60269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749616   60269 kubeadm.go:787] kubelet initialised
	I0116 23:55:29.749676   60269 kubeadm.go:788] duration metric: took 5.958924ms waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749687   60269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:29.756788   60269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.762593   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762669   60269 pod_ready.go:81] duration metric: took 5.856721ms waiting for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.762686   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762695   60269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.768772   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768801   60269 pod_ready.go:81] duration metric: took 6.092773ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.768816   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768824   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.775409   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775442   60269 pod_ready.go:81] duration metric: took 6.605139ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.775455   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775463   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.902106   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902206   60269 pod_ready.go:81] duration metric: took 126.731712ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.902236   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902269   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829869   60269 pod_ready.go:92] pod "kube-proxy-jmq58" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:30.829891   60269 pod_ready.go:81] duration metric: took 927.598475ms waiting for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829900   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:32.831782   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.899557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:33.397105   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.574029   59622 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669 for IP: 192.168.72.114
	I0116 23:55:30.890778   59622 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:30.890952   59622 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:30.891020   59622 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:30.891123   59622 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/client.key
	I0116 23:55:31.309085   59622 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key.9adeb8c5
	I0116 23:55:31.309205   59622 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key
	I0116 23:55:31.309360   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:31.309405   59622 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:31.309417   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:31.309461   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:31.309514   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:31.309547   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:31.309606   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:31.310493   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:31.335886   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:55:31.358617   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:31.382183   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:55:31.407509   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:31.429683   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:31.453368   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:31.476083   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:31.499326   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:31.522939   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:31.548912   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:31.571716   59622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:31.587851   59622 ssh_runner.go:195] Run: openssl version
	I0116 23:55:31.593185   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:31.602521   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.606986   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.607049   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.612447   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:31.622043   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:31.631959   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636586   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636653   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.642415   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:31.651566   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:31.660990   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665574   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665624   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.671129   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:31.680951   59622 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:31.685144   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:31.690488   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:31.696140   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:31.702013   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:31.707887   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:31.713601   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
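	(The six openssl runs above verify that each control-plane certificate remains valid for at least 86400 seconds, i.e. 24 hours; "openssl x509 -checkend 86400" exits non-zero when the certificate would expire within that window. A minimal Go sketch of the same check using crypto/x509 follows; the file path in main is just one of the certificates named in the log and is only illustrative.)

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the PEM-encoded certificate at path expires
	    // within d — the same condition that "openssl x509 -checkend" tests.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM data in %s", path)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	if soon {
	    		fmt.Println("certificate expires within 24h: regeneration needed")
	    	} else {
	    		fmt.Println("certificate valid for at least another 24h")
	    	}
	    }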
	I0116 23:55:31.719957   59622 kubeadm.go:404] StartCluster: {Name:old-k8s-version-771669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:31.720050   59622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:31.720106   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:31.764090   59622 cri.go:89] found id: ""
	I0116 23:55:31.764179   59622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:31.772783   59622 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:31.772800   59622 kubeadm.go:636] restartCluster start
	I0116 23:55:31.772900   59622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:31.782951   59622 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:31.784108   59622 kubeconfig.go:92] found "old-k8s-version-771669" server: "https://192.168.72.114:8443"
	I0116 23:55:31.786822   59622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:31.795516   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:31.795564   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:31.806541   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.296087   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.296205   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.308136   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.796155   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.796250   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.812275   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.295834   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.295918   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.309867   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.796504   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.796592   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.808880   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.296500   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.296567   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.308101   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.795674   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.795765   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.808334   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:35.295900   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.295998   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.308522   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.987445   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:37.488388   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:34.836821   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:36.837242   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.896319   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.396168   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.796048   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.796157   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.809841   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.296449   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.296573   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.309339   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.795874   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.795953   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.810740   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.296322   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.296421   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.308384   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.796469   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.796576   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.810173   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.295663   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.295750   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.307391   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.795952   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.796050   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.809147   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.295669   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.295754   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.308210   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.796104   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.796226   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.808134   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:40.295713   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.295815   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.307552   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.986946   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.487118   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.838230   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:39.837451   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:39.837475   60269 pod_ready.go:81] duration metric: took 9.007568234s waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:39.837495   60269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:41.844595   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.397089   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.896014   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.795619   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.795698   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.809529   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.296081   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.296153   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.309642   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.796355   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.796439   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.808383   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.808409   59622 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:41.808417   59622 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:41.808426   59622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:41.808480   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:41.851612   59622 cri.go:89] found id: ""
	I0116 23:55:41.851668   59622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:41.867103   59622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:41.876244   59622 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:41.876306   59622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886007   59622 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886029   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.004968   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.972680   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.175241   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.242840   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
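	(After the stale kubeconfigs are found missing, the restart path regenerates the control plane piece by piece with individual "kubeadm init phase" invocations: certs, kubeconfig, kubelet-start, control-plane, etcd. Below is a minimal Go sketch of running that same sequence; the binary and config paths are taken from the log, and error handling is deliberately simplified.)

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    func main() {
	    	kubeadm := "/var/lib/minikube/binaries/v1.16.0/kubeadm"
	    	config := "/var/tmp/minikube/kubeadm.yaml"
	    	// Same ordering as the log: certs, kubeconfigs, kubelet, static pods, etcd.
	    	phases := [][]string{
	    		{"init", "phase", "certs", "all", "--config", config},
	    		{"init", "phase", "kubeconfig", "all", "--config", config},
	    		{"init", "phase", "kubelet-start", "--config", config},
	    		{"init", "phase", "control-plane", "all", "--config", config},
	    		{"init", "phase", "etcd", "local", "--config", config},
	    	}
	    	for _, args := range phases {
	    		cmd := exec.Command(kubeadm, args...)
	    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	    		if err := cmd.Run(); err != nil {
	    			fmt.Printf("kubeadm %v failed: %v\n", args, err)
	    			return
	    		}
	    	}
	    	fmt.Println("control plane manifests regenerated; kubelet restarts the static pods")
	    }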
	I0116 23:55:43.330848   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:43.330935   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:43.831021   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.331539   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.831545   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.331601   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.354248   59622 api_server.go:72] duration metric: took 2.023403352s to wait for apiserver process to appear ...
	I0116 23:55:45.354271   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:45.354287   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:45.354802   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": dial tcp 192.168.72.114:8443: connect: connection refused
	I0116 23:55:44.988114   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.486765   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:43.846368   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.848129   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:48.344150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:44.897147   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.396873   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.855032   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:50.855392   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 23:55:50.855430   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.372327   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.372361   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.372383   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.429072   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.429102   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.854848   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.861367   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:51.861393   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.354990   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.360925   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:52.360951   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.854778   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.861036   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:55:52.868982   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:55:52.869013   59622 api_server.go:131] duration metric: took 7.514729701s to wait for apiserver health ...
	I0116 23:55:52.869024   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:52.869033   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:52.870842   59622 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:49.486891   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.489411   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:50.345462   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.345784   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:49.397270   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.397489   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:53.398253   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.872155   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:52.883251   59622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
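(The 457-byte conflist scp'd above is not reproduced in the log. For reference, this is a representative bridge-plugin configuration of the kind /etc/cni/net.d/1-k8s.conflist typically contains; the network name, pod subnet, and plugin list below are assumptions for illustration, not the exact file minikube generated.)

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}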
	I0116 23:55:52.904708   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:52.916515   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:55:52.916550   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:55:52.916558   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:55:52.916564   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:55:52.916571   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Pending
	I0116 23:55:52.916577   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:55:52.916584   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:55:52.916597   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:55:52.916606   59622 system_pods.go:74] duration metric: took 11.876364ms to wait for pod list to return data ...
	I0116 23:55:52.916618   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:52.920125   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:52.920158   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:52.920178   59622 node_conditions.go:105] duration metric: took 3.551281ms to run NodePressure ...
	I0116 23:55:52.920199   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:53.157112   59622 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161560   59622 kubeadm.go:787] kubelet initialised
	I0116 23:55:53.161590   59622 kubeadm.go:788] duration metric: took 4.45031ms waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161601   59622 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:53.167210   59622 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.172679   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172705   59622 pod_ready.go:81] duration metric: took 5.453621ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.172713   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172722   59622 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.178090   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178121   59622 pod_ready.go:81] duration metric: took 5.38864ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.178132   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178141   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.183932   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183963   59622 pod_ready.go:81] duration metric: took 5.809315ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.183973   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183979   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.309476   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309502   59622 pod_ready.go:81] duration metric: took 125.513469ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.309518   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309526   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.710400   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710426   59622 pod_ready.go:81] duration metric: took 400.892114ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.710435   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710441   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:54.108608   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108638   59622 pod_ready.go:81] duration metric: took 398.187187ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:54.108652   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108661   59622 pod_ready.go:38] duration metric: took 947.048567ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:54.108682   59622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:54.128862   59622 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:54.128889   59622 kubeadm.go:640] restartCluster took 22.356081524s
	I0116 23:55:54.128900   59622 kubeadm.go:406] StartCluster complete in 22.408946885s
	I0116 23:55:54.128919   59622 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.129004   59622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:54.131909   59622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.132201   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:54.132350   59622 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:54.132423   59622 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-771669"
	I0116 23:55:54.132445   59622 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-771669"
	I0116 23:55:54.132446   59622 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-771669"
	W0116 23:55:54.132457   59622 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:54.132467   59622 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:54.132468   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0116 23:55:54.132479   59622 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:54.132520   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132551   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132889   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.132943   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133041   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133083   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133245   59622 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-771669"
	I0116 23:55:54.133294   59622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-771669"
	I0116 23:55:54.133724   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133789   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.148645   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0116 23:55:54.148879   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 23:55:54.149227   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149356   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149715   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149739   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.149900   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149917   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.150032   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150210   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.150281   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150883   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.150932   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.154047   59622 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-771669"
	W0116 23:55:54.154070   59622 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:54.154099   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.154457   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.154502   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.156296   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0116 23:55:54.156719   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.157170   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.157199   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.157673   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.158266   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.158321   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.168301   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0116 23:55:54.168898   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.169505   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.169524   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.169888   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.170106   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.171966   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.174198   59622 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:54.173406   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0116 23:55:54.179587   59622 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.179605   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:54.179625   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.174560   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0116 23:55:54.180004   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180109   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180627   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180653   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180768   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180790   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180993   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181177   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181353   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.181578   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.181627   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.183580   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.185359   59622 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:54.184028   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.184548   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.186663   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:54.186672   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.186679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:54.186699   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.186698   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.186864   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.186964   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.187041   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.189698   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190070   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.190133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190266   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.190461   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.190582   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.190678   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.215481   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0116 23:55:54.215974   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.216416   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.216435   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.216816   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.217016   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.219327   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.219556   59622 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.219571   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:54.219588   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.222719   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223367   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.223154   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.223442   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223564   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.223712   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.223850   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.356173   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:54.356192   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:54.371191   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.410651   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:54.410679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:54.413826   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.524186   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.524211   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:54.553600   59622 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:54.610636   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.692080   59622 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-771669" context rescaled to 1 replicas
	I0116 23:55:54.692117   59622 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:54.694001   59622 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:54.695339   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:55.104119   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104142   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104162   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104148   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104471   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104493   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.104504   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104514   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104558   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104729   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104745   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104748   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105133   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105152   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105185   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.105199   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.105402   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.105496   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105518   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.113836   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.113861   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.114230   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.114254   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.114275   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.125955   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.125983   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.125955   59622 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:55:55.126228   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126243   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126267   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.126278   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.126579   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126599   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126609   59622 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:55.126587   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.128592   59622 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 23:55:55.129717   59622 addons.go:505] enable addons completed in 997.38021ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 23:55:53.987019   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.987081   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.485357   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:54.345875   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:56.347375   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.898737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.905488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.130634   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:59.630394   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:56:00.487739   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.985925   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.845233   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:00.845467   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:03.344488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.130130   59622 node_ready.go:49] node "old-k8s-version-771669" has status "Ready":"True"
	I0116 23:56:02.130152   59622 node_ready.go:38] duration metric: took 7.004088356s waiting for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:56:02.130160   59622 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.135239   59622 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140322   59622 pod_ready.go:92] pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.140347   59622 pod_ready.go:81] duration metric: took 5.084772ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140358   59622 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144917   59622 pod_ready.go:92] pod "etcd-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.144938   59622 pod_ready.go:81] duration metric: took 4.572247ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144946   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149588   59622 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.149606   59622 pod_ready.go:81] duration metric: took 4.65461ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149614   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153874   59622 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.153891   59622 pod_ready.go:81] duration metric: took 4.272031ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153899   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531721   59622 pod_ready.go:92] pod "kube-proxy-9ghls" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.531742   59622 pod_ready.go:81] duration metric: took 377.837979ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531751   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930934   59622 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.930957   59622 pod_ready.go:81] duration metric: took 399.199037ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930966   59622 pod_ready.go:38] duration metric: took 800.791409ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.930982   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:56:02.931031   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:56:02.945606   59622 api_server.go:72] duration metric: took 8.253459173s to wait for apiserver process to appear ...
	I0116 23:56:02.945631   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:56:02.945649   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:56:02.952493   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:56:02.953510   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:56:02.953536   59622 api_server.go:131] duration metric: took 7.895148ms to wait for apiserver health ...
	I0116 23:56:02.953545   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:56:03.133648   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:56:03.133673   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.133679   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.133683   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.133688   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.133691   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.133695   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.133698   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.133704   59622 system_pods.go:74] duration metric: took 180.152859ms to wait for pod list to return data ...
	I0116 23:56:03.133710   59622 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:56:03.331291   59622 default_sa.go:45] found service account: "default"
	I0116 23:56:03.331318   59622 default_sa.go:55] duration metric: took 197.601815ms for default service account to be created ...
	I0116 23:56:03.331327   59622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:56:03.535418   59622 system_pods.go:86] 7 kube-system pods found
	I0116 23:56:03.535445   59622 system_pods.go:89] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.535450   59622 system_pods.go:89] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.535454   59622 system_pods.go:89] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.535459   59622 system_pods.go:89] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.535462   59622 system_pods.go:89] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.535466   59622 system_pods.go:89] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.535470   59622 system_pods.go:89] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.535476   59622 system_pods.go:126] duration metric: took 204.144185ms to wait for k8s-apps to be running ...
	I0116 23:56:03.535483   59622 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:56:03.535528   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:56:03.558457   59622 system_svc.go:56] duration metric: took 22.958568ms WaitForService to wait for kubelet.
	I0116 23:56:03.558483   59622 kubeadm.go:581] duration metric: took 8.866344408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:56:03.558508   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:56:03.731393   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:56:03.731421   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:56:03.731429   59622 node_conditions.go:105] duration metric: took 172.916822ms to run NodePressure ...
	I0116 23:56:03.731440   59622 start.go:228] waiting for startup goroutines ...
	I0116 23:56:03.731446   59622 start.go:233] waiting for cluster config update ...
	I0116 23:56:03.731455   59622 start.go:242] writing updated cluster config ...
	I0116 23:56:03.731701   59622 ssh_runner.go:195] Run: rm -f paused
	I0116 23:56:03.779121   59622 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 23:56:03.780832   59622 out.go:177] 
	W0116 23:56:03.782249   59622 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 23:56:03.783563   59622 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 23:56:03.784839   59622 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-771669" cluster and "default" namespace by default
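(The pod_ready.go lines that dominate the remainder of this log are a roughly two-second poll of each pod's Ready condition; the metrics-server pods here never reach it. A rough client-go equivalent follows, reusing the kubeconfig path, namespace, and pod name that appear in the surrounding lines; everything else — structure, timeout, error handling — is an illustrative sketch, not minikube's code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17975-6238/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every two seconds, the same cadence the pod_ready.go lines show,
	// giving up after four minutes the way the test's extra wait does.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-xbr22", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}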
	I0116 23:56:00.398654   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.895567   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:04.986421   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:06.987967   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.844145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.844338   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.397178   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.895626   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.486597   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:11.987301   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:10.345558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.346663   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.896758   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.397091   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.488021   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.488653   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.844671   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.846046   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.897098   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:17.396519   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.986905   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.488422   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.846198   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.344147   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:19.397728   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.896773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.986213   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:25.986326   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:27.987150   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.845648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.344054   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:28.344553   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:24.396383   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.896341   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.487401   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.986835   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.346441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.847915   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:29.396831   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:31.397001   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:33.896875   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.486456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.488505   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:34.852382   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.347707   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.897340   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:38.397188   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.987512   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.487096   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.845150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:40.397474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.895926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.985826   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.987077   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.344935   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.844558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:45.397742   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:47.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:48.987672   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.488276   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.344755   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.844573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.902616   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:52.397613   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.989294   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:56.486373   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.844691   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:55.844956   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.345033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:54.899462   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:57.396680   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.986702   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.485949   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.486250   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:00.347078   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:02.845105   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:59.397016   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.397815   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.898419   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.486385   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.486685   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.344293   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.345029   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:06.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:08.397358   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.986254   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:11.986807   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.845903   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.345589   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:10.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.896725   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:13.986990   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.487092   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:14.845336   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.845800   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:15.396130   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:17.399737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:18.986833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:20.987345   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.486929   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.344648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.345638   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.896048   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.897272   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:25.987181   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.488006   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.846298   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.345451   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.346186   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:24.398032   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.896171   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.987497   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:33.485899   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.347831   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:32.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:29.398760   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:31.896331   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.486038   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.487296   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.344615   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.844449   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:34.397051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:36.400079   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:38.896897   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.492372   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.987336   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.847519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:42.346252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.396236   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.396714   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.988240   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:46.486455   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:48.487134   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:44.848036   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.345407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:45.397310   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.397378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:50.986902   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.492230   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.845627   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.397826   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.895923   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.897342   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:55.986753   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:57.986861   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:54.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.344864   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.345725   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.897155   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.486888   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.987550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.844347   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.846516   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:01.396565   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:03.397374   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:04.990116   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.487567   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.345481   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.844570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.897023   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:08.396985   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.990087   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.490589   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.844815   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:11.845732   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:10.895979   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.896502   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.986451   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.986611   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.344767   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.844872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:15.398203   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:17.399261   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:18.987191   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.487703   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:23.487926   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.347376   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.845439   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.896972   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:22.397424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:25.987262   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.486174   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.344012   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.347050   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.398243   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.896557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.987243   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.988415   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.844551   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.845899   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.846576   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:29.396646   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:31.397556   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:33.896411   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.486850   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.985735   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.344337   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.344473   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.896685   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.898876   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.986999   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.486890   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.345534   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:41.345897   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:40.396241   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.396546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.987464   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.988853   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:43.846142   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.343994   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.396719   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.896228   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.896671   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:49.486803   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:51.491540   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.845009   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.847872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:52.847933   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.897309   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.396763   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.987492   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:56.486550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:58.486963   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.346425   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.347346   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.397687   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.399191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:00.987456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.486837   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.843983   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.844326   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.895907   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.896151   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.900424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:05.991223   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.486493   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.844751   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.344021   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.344949   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.397063   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.895750   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.987148   59938 pod_ready.go:81] duration metric: took 4m0.007687151s waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:08.987175   59938 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 23:59:08.987182   59938 pod_ready.go:38] duration metric: took 4m1.609147819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:08.987199   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:59:08.987235   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:08.987285   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:09.035133   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:09.035154   59938 cri.go:89] found id: ""
	I0116 23:59:09.035161   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:09.035211   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.039082   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:09.039138   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:09.085096   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:09.085167   59938 cri.go:89] found id: ""
	I0116 23:59:09.085181   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:09.085246   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.090821   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:09.090893   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:09.127517   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.127548   59938 cri.go:89] found id: ""
	I0116 23:59:09.127558   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:09.127620   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.131643   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:09.131759   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:09.168954   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:09.168979   59938 cri.go:89] found id: ""
	I0116 23:59:09.168988   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:09.169049   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.173389   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:09.173454   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:09.212516   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.212543   59938 cri.go:89] found id: ""
	I0116 23:59:09.212549   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:09.212597   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.216401   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:09.216458   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:09.253140   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.253166   59938 cri.go:89] found id: ""
	I0116 23:59:09.253176   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:09.253235   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.257248   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:09.257315   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:09.296077   59938 cri.go:89] found id: ""
	I0116 23:59:09.296108   59938 logs.go:284] 0 containers: []
	W0116 23:59:09.296119   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:09.296126   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:09.296184   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:09.346212   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:09.346234   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:09.346240   59938 cri.go:89] found id: ""
	I0116 23:59:09.346261   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:09.346320   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.350651   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.353960   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:09.353984   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.387875   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:09.387900   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.428147   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:09.428173   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:09.481107   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:09.481135   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:09.536958   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:09.536994   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:09.550512   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:09.550547   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.605837   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:09.605870   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:10.096496   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:10.096548   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:10.134931   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:10.134973   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:10.276791   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:10.276824   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:10.335509   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:10.335544   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:10.395664   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:10.395708   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.431013   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:10.431051   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:12.975358   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:59:12.989628   59938 api_server.go:72] duration metric: took 4m12.851755215s to wait for apiserver process to appear ...
	I0116 23:59:12.989650   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:59:12.989689   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:12.989738   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:13.026039   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.026071   59938 cri.go:89] found id: ""
	I0116 23:59:13.026083   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:13.026138   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.030174   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:13.030236   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:13.067808   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:13.067834   59938 cri.go:89] found id: ""
	I0116 23:59:13.067840   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:13.067888   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.072042   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:13.072118   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:13.111330   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.111351   59938 cri.go:89] found id: ""
	I0116 23:59:13.111359   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:13.111403   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.115095   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:13.115187   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:13.158668   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:13.158691   59938 cri.go:89] found id: ""
	I0116 23:59:13.158699   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:13.158758   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.162836   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:13.162899   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:13.202353   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:13.202372   59938 cri.go:89] found id: ""
	I0116 23:59:13.202379   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:13.202425   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.206475   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:13.206544   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:13.241036   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:13.241069   59938 cri.go:89] found id: ""
	I0116 23:59:13.241080   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:13.241136   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.245245   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:13.245309   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:13.286069   59938 cri.go:89] found id: ""
	I0116 23:59:13.286098   59938 logs.go:284] 0 containers: []
	W0116 23:59:13.286107   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:13.286115   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:13.286178   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:13.324129   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.324148   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.324152   59938 cri.go:89] found id: ""
	I0116 23:59:13.324159   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:13.324201   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.328325   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.332030   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:13.332052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:13.345141   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:13.345181   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.404778   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:13.404809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.441286   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:13.441323   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:13.503668   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:13.503702   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.542599   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:13.542631   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.347184   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:12.844417   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:10.896545   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.397454   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.578579   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:13.578609   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.615906   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:13.615934   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:14.022019   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:14.022058   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:14.139776   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:14.139809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:14.201936   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:14.201970   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:14.240473   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:14.240500   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:14.291008   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:14.291037   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:16.843555   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:59:16.849532   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:59:16.850519   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:59:16.850538   59938 api_server.go:131] duration metric: took 3.860882856s to wait for apiserver health ...
	I0116 23:59:16.850547   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:59:16.850568   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:16.850610   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:16.900417   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:16.900434   59938 cri.go:89] found id: ""
	I0116 23:59:16.900441   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:16.900493   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.905495   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:16.905548   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:16.945387   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:16.945406   59938 cri.go:89] found id: ""
	I0116 23:59:16.945413   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:16.945463   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.949948   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:16.950016   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:16.987183   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:16.987202   59938 cri.go:89] found id: ""
	I0116 23:59:16.987209   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:16.987252   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.992140   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:16.992191   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:17.029253   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.029275   59938 cri.go:89] found id: ""
	I0116 23:59:17.029282   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:17.029336   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.033524   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:17.033609   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:17.068889   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:17.068913   59938 cri.go:89] found id: ""
	I0116 23:59:17.068932   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:17.068986   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.072818   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:17.072885   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:17.111186   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.111207   59938 cri.go:89] found id: ""
	I0116 23:59:17.111216   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:17.111279   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.115133   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:17.115192   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:17.150279   59938 cri.go:89] found id: ""
	I0116 23:59:17.150307   59938 logs.go:284] 0 containers: []
	W0116 23:59:17.150316   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:17.150321   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:17.150401   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:17.192284   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.192321   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.192328   59938 cri.go:89] found id: ""
	I0116 23:59:17.192338   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:17.192394   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.196472   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.200243   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:17.200266   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.240155   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:17.240188   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:17.252553   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:17.252585   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.304688   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:17.304721   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.346444   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:17.346470   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:17.497208   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:17.497241   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:17.561621   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:17.561648   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:17.611648   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:17.611677   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.646407   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:17.646436   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:17.991476   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:17.991528   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:18.053214   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:18.053251   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:18.128011   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:18.128049   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:18.165018   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:18.165052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:15.345715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.849104   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:15.896059   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.890054   60073 pod_ready.go:81] duration metric: took 4m0.00102229s waiting for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:17.890102   60073 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:17.890127   60073 pod_ready.go:38] duration metric: took 4m7.665333761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:17.890162   60073 kubeadm.go:640] restartCluster took 4m29.748178484s
	W0116 23:59:17.890247   60073 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:17.890288   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:20.715055   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:59:20.715096   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.715109   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.715116   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.715123   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.715129   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.715136   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.715146   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.715156   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.715180   59938 system_pods.go:74] duration metric: took 3.864627163s to wait for pod list to return data ...
	I0116 23:59:20.715190   59938 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:59:20.718138   59938 default_sa.go:45] found service account: "default"
	I0116 23:59:20.718165   59938 default_sa.go:55] duration metric: took 2.964863ms for default service account to be created ...
	I0116 23:59:20.718175   59938 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:59:20.724393   59938 system_pods.go:86] 8 kube-system pods found
	I0116 23:59:20.724420   59938 system_pods.go:89] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.724428   59938 system_pods.go:89] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.724435   59938 system_pods.go:89] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.724443   59938 system_pods.go:89] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.724449   59938 system_pods.go:89] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.724457   59938 system_pods.go:89] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.724467   59938 system_pods.go:89] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.724479   59938 system_pods.go:89] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.724490   59938 system_pods.go:126] duration metric: took 6.307831ms to wait for k8s-apps to be running ...
	I0116 23:59:20.724503   59938 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:59:20.724558   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:20.739056   59938 system_svc.go:56] duration metric: took 14.504317ms WaitForService to wait for kubelet.
	I0116 23:59:20.739102   59938 kubeadm.go:581] duration metric: took 4m20.601225794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:59:20.739130   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:59:20.742521   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:59:20.742550   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:59:20.742565   59938 node_conditions.go:105] duration metric: took 3.429513ms to run NodePressure ...
	I0116 23:59:20.742581   59938 start.go:228] waiting for startup goroutines ...
	I0116 23:59:20.742594   59938 start.go:233] waiting for cluster config update ...
	I0116 23:59:20.742607   59938 start.go:242] writing updated cluster config ...
	I0116 23:59:20.742897   59938 ssh_runner.go:195] Run: rm -f paused
	I0116 23:59:20.796748   59938 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 23:59:20.799136   59938 out.go:177] * Done! kubectl is now configured to use "no-preload-085322" cluster and "default" namespace by default
	I0116 23:59:20.345640   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:22.845018   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:24.845103   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:26.846579   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:29.345070   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.346027   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:33.346506   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.203795   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.313480768s)
	I0116 23:59:31.203876   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:31.217359   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:31.228245   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:31.238220   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:31.238268   60073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:31.453638   60073 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 23:59:35.845570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:37.845959   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:42.067699   60073 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:42.067758   60073 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:42.067846   60073 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:42.067963   60073 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:42.068086   60073 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:42.068177   60073 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:42.069920   60073 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:42.070029   60073 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:42.070134   60073 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:42.070239   60073 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:42.070320   60073 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:42.070461   60073 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:42.070543   60073 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:42.070628   60073 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:42.070700   60073 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:42.070790   60073 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:42.070885   60073 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:42.070932   60073 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:42.070998   60073 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:42.071063   60073 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:42.071135   60073 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:42.071215   60073 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:42.071285   60073 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:42.071387   60073 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:42.071470   60073 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:42.072979   60073 out.go:204]   - Booting up control plane ...
	I0116 23:59:42.073092   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:42.073200   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:42.073276   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:42.073388   60073 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:42.073521   60073 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:42.073576   60073 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:42.073797   60073 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:42.073902   60073 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002800 seconds
	I0116 23:59:42.074028   60073 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 23:59:42.074167   60073 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 23:59:42.074262   60073 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 23:59:42.074513   60073 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-837871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 23:59:42.074590   60073 kubeadm.go:322] [bootstrap-token] Using token: ta3wls.bkzq7grnlnkl7idk
	I0116 23:59:42.076261   60073 out.go:204]   - Configuring RBAC rules ...
	I0116 23:59:42.076394   60073 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 23:59:42.076494   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 23:59:42.076672   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 23:59:42.076836   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 23:59:42.077027   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 23:59:42.077141   60073 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 23:59:42.077286   60073 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 23:59:42.077338   60073 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 23:59:42.077401   60073 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 23:59:42.077420   60073 kubeadm.go:322] 
	I0116 23:59:42.077490   60073 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 23:59:42.077501   60073 kubeadm.go:322] 
	I0116 23:59:42.077590   60073 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 23:59:42.077599   60073 kubeadm.go:322] 
	I0116 23:59:42.077631   60073 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 23:59:42.077704   60073 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 23:59:42.077768   60073 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 23:59:42.077777   60073 kubeadm.go:322] 
	I0116 23:59:42.077841   60073 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 23:59:42.077855   60073 kubeadm.go:322] 
	I0116 23:59:42.077910   60073 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 23:59:42.077918   60073 kubeadm.go:322] 
	I0116 23:59:42.077980   60073 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 23:59:42.078071   60073 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 23:59:42.078167   60073 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 23:59:42.078177   60073 kubeadm.go:322] 
	I0116 23:59:42.078274   60073 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 23:59:42.078382   60073 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 23:59:42.078392   60073 kubeadm.go:322] 
	I0116 23:59:42.078488   60073 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078612   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 23:59:42.078642   60073 kubeadm.go:322] 	--control-plane 
	I0116 23:59:42.078651   60073 kubeadm.go:322] 
	I0116 23:59:42.078749   60073 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 23:59:42.078758   60073 kubeadm.go:322] 
	I0116 23:59:42.078854   60073 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078989   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
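For reference, the --discovery-token-ca-cert-hash value printed in both join commands above is the SHA-256 digest of the cluster CA's public key. A sketch of how to recompute it on the node, assuming the /var/lib/minikube/certs certificateDir reported by kubeadm init earlier in this log and an RSA CA (this is the kubeadm-documented recipe, not something the test itself runs):

    # Recompute the CA public-key hash used by "kubeadm join" (illustrative;
    # the cert path and RSA key type are assumptions based on the log above).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'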
	I0116 23:59:42.079007   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:59:42.079017   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:59:42.080763   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:59:39.838671   60269 pod_ready.go:81] duration metric: took 4m0.001157455s waiting for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:39.838703   60269 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:39.838724   60269 pod_ready.go:38] duration metric: took 4m10.089026356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:39.838774   60269 kubeadm.go:640] restartCluster took 4m29.617057242s
	W0116 23:59:39.838852   60269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:39.838881   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:42.082183   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:59:42.116830   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
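The 457-byte /etc/cni/net.d/1-k8s.conflist copied here is the bridge CNI configuration referenced by the "Configuring bridge CNI" step above. Its exact contents are not captured in this log; a minimal bridge-style conflist of the same general shape looks roughly like the sketch below (plugin names and the pod subnet are placeholders, not minikube's actual values):

    # Illustrative only: a minimal bridge CNI conflist; NOT the exact file
    # minikube writes (subnet and names are placeholders).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF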
	I0116 23:59:42.163609   60073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:59:42.163699   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.163705   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=embed-certs-837871 minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.221959   60073 ops.go:34] apiserver oom_adj: -16
	I0116 23:59:42.506451   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.007345   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.506584   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.007197   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.507002   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.006480   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.506954   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.006461   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.506833   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.007157   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.506780   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.007146   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.506504   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:49.006489   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.364253   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.525344336s)
	I0116 23:59:53.364334   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:53.379240   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:53.389562   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:53.400331   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:53.400385   60269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:53.462116   60269 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:53.462202   60269 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:53.624890   60269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:53.625015   60269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:53.625132   60269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:53.877364   60269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:49.506939   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.007132   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.506909   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.006499   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.506508   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.006475   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.507008   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.007272   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.506479   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.007240   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.507034   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.651685   60073 kubeadm.go:1088] duration metric: took 12.488048347s to wait for elevateKubeSystemPrivileges.
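The run of sudo .../kubectl get sa default calls above is the wait behind the 12.488048347s elevateKubeSystemPrivileges metric: minikube retries the same kubectl query until the default ServiceAccount exists. In shell terms the loop amounts to roughly the following (a sketch of the observed behavior, not minikube's actual Go implementation):

    # Poll until the "default" ServiceAccount is available (paths as in the log).
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done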
	I0116 23:59:54.651729   60073 kubeadm.go:406] StartCluster complete in 5m6.561279262s
	I0116 23:59:54.651753   60073 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.651855   60073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:59:54.654608   60073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.654868   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:59:54.654894   60073 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:59:54.654964   60073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-837871"
	I0116 23:59:54.654980   60073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-837871"
	I0116 23:59:54.655005   60073 addons.go:69] Setting metrics-server=true in profile "embed-certs-837871"
	I0116 23:59:54.655018   60073 addons.go:234] Setting addon metrics-server=true in "embed-certs-837871"
	W0116 23:59:54.655027   60073 addons.go:243] addon metrics-server should already be in state true
	I0116 23:59:54.655090   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:59:54.655026   60073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-837871"
	I0116 23:59:54.655160   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.654988   60073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-837871"
	W0116 23:59:54.655234   60073 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:59:54.655271   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.655539   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655568   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655652   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655734   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.672017   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0116 23:59:54.672591   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.673220   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.673241   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.673335   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0116 23:59:54.673863   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0116 23:59:54.673894   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.673865   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674262   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674430   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674447   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.674491   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.674517   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.674764   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.674932   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674943   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.675310   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.675465   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.675601   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.675631   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.679148   60073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-837871"
	W0116 23:59:54.679166   60073 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:59:54.679192   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.679564   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.679582   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.694210   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0116 23:59:54.694711   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.694923   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0116 23:59:54.695308   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.695325   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.695432   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.695724   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.696036   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.696059   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.696124   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.696524   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.697116   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.697142   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.697326   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0116 23:59:54.697741   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.698016   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.700178   60073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:59:54.698504   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.701842   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.701911   60073 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:54.701927   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:59:54.701945   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.704090   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.704258   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.705992   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.706067   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.707873   60073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:59:53.878701   60269 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:53.878801   60269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:53.878881   60269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:53.879376   60269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:53.879833   60269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:53.880391   60269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:53.880900   60269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:53.881422   60269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:53.881941   60269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:53.882468   60269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:53.882982   60269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:53.883410   60269 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:53.883502   60269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:54.118678   60269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:54.334917   60269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:54.487424   60269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:55.124961   60269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:55.125701   60269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:55.128156   60269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:54.706475   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.706576   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.709278   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:59:54.709292   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:59:54.709305   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.709341   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.709501   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.709672   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.709805   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.712515   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713092   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.713180   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713283   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.713426   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.713633   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.713742   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.716354   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0116 23:59:54.716699   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.717118   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.717135   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.717441   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.717677   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.719338   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.719591   60073 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:54.719604   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:59:54.719619   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.722542   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.722963   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.723002   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.723112   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.723259   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.723463   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.723587   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.885431   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 23:59:55.001297   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:59:55.001329   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:59:55.003513   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:55.008428   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:55.068722   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:59:55.068751   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:59:55.129663   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:55.129686   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:59:55.161891   60073 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-837871" context rescaled to 1 replicas
	I0116 23:59:55.161935   60073 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:59:55.164356   60073 out.go:177] * Verifying Kubernetes components...
	I0116 23:59:55.165822   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:55.240612   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:56.696329   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810851137s)
	I0116 23:59:56.696383   60073 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
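The sed pipeline above edits the coredns ConfigMap so the cluster resolves host.minikube.internal to the host-side address 192.168.39.1. Reconstructed from the sed expressions, the replace leaves the Corefile with a hosts stanza inserted ahead of the forward directive (plus a log directive inserted before errors):

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }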
	I0116 23:59:56.696338   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.69278648s)
	I0116 23:59:56.696422   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696440   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.696806   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.696868   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.696879   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.696889   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696898   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.697174   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.697191   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.697193   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.729656   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.729685   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.730006   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.730047   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.730051   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.196943   60073 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.031082317s)
	I0116 23:59:57.196991   60073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.197171   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.188708335s)
	I0116 23:59:57.197216   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197232   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197556   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197573   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.197590   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197600   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197905   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.197908   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197976   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.211232   60073 node_ready.go:49] node "embed-certs-837871" has status "Ready":"True"
	I0116 23:59:57.211308   60073 node_ready.go:38] duration metric: took 14.304366ms waiting for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.211330   60073 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:57.230768   60073 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:57.274393   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033730298s)
	I0116 23:59:57.274453   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274471   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.274881   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.274904   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.274915   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274925   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.275196   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.275249   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.275273   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.275284   60073 addons.go:470] Verifying addon metrics-server=true in "embed-certs-837871"
	I0116 23:59:57.277304   60073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 23:59:55.129817   60269 out.go:204]   - Booting up control plane ...
	I0116 23:59:55.129937   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:55.130951   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:55.132943   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:55.149929   60269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:55.151138   60269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:55.151234   60269 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:55.303686   60269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:57.278953   60073 addons.go:505] enable addons completed in 2.62405803s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 23:59:58.738410   60073 pod_ready.go:92] pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.738434   60073 pod_ready.go:81] duration metric: took 1.507588571s waiting for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.738444   60073 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744592   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.744617   60073 pod_ready.go:81] duration metric: took 6.165419ms waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744626   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750130   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.750152   60073 pod_ready.go:81] duration metric: took 5.519057ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750164   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755783   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.755809   60073 pod_ready.go:81] duration metric: took 5.636904ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755821   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801735   60073 pod_ready.go:92] pod "kube-proxy-n2l6s" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.801769   60073 pod_ready.go:81] duration metric: took 45.939564ms waiting for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801784   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:02.807761   60269 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503615 seconds
	I0117 00:00:02.807943   60269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:00:02.828242   60269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:00:03.364977   60269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:00:03.365242   60269 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-967325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:00:03.879636   60269 kubeadm.go:322] [bootstrap-token] Using token: y6fuay.d44apxq5qutu9x05
	I0116 23:59:59.202392   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:59.202420   60073 pod_ready.go:81] duration metric: took 400.626378ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:59.202435   60073 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:01.211490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.710138   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.881170   60269 out.go:204]   - Configuring RBAC rules ...
	I0117 00:00:03.881357   60269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:00:03.888392   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:00:03.896580   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:00:03.900204   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:00:03.907475   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:00:03.911613   60269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:00:03.931171   60269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:00:04.171033   60269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:00:04.300769   60269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:00:04.300793   60269 kubeadm.go:322] 
	I0117 00:00:04.300911   60269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:00:04.300944   60269 kubeadm.go:322] 
	I0117 00:00:04.301038   60269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:00:04.301049   60269 kubeadm.go:322] 
	I0117 00:00:04.301089   60269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:00:04.301161   60269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:00:04.301223   60269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:00:04.301234   60269 kubeadm.go:322] 
	I0117 00:00:04.301302   60269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:00:04.301312   60269 kubeadm.go:322] 
	I0117 00:00:04.301373   60269 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:00:04.301387   60269 kubeadm.go:322] 
	I0117 00:00:04.301445   60269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:00:04.301545   60269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:00:04.301645   60269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:00:04.301656   60269 kubeadm.go:322] 
	I0117 00:00:04.301758   60269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:00:04.301861   60269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:00:04.301871   60269 kubeadm.go:322] 
	I0117 00:00:04.301972   60269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302108   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:00:04.302156   60269 kubeadm.go:322] 	--control-plane 
	I0117 00:00:04.302167   60269 kubeadm.go:322] 
	I0117 00:00:04.302261   60269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:00:04.302272   60269 kubeadm.go:322] 
	I0117 00:00:04.302381   60269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302499   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:00:04.303423   60269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:00:04.303460   60269 cni.go:84] Creating CNI manager for ""
	I0117 00:00:04.303481   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:00:04.305311   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:00:04.307124   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:00:04.322172   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0117 00:00:04.389195   60269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:00:04.389280   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.389289   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=default-k8s-diff-port-967325 minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.714781   60269 ops.go:34] apiserver oom_adj: -16
	I0117 00:00:04.714929   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.215335   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.715241   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.215729   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.715270   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.215562   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.716006   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.215883   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.715530   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.710945   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:08.210490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:09.215561   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:09.715330   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215559   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.715284   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.215535   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.715573   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.215144   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.715603   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.715595   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:12.709378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:14.215373   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:14.715933   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.715488   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.215344   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.714958   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.874728   60269 kubeadm.go:1088] duration metric: took 12.485508304s to wait for elevateKubeSystemPrivileges.
	I0117 00:00:16.874771   60269 kubeadm.go:406] StartCluster complete in 5m6.711968782s
	I0117 00:00:16.874796   60269 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.874888   60269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:00:16.877055   60269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.877357   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0117 00:00:16.877379   60269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0117 00:00:16.877462   60269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877481   60269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877496   60269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877517   60269 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877523   60269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-967325"
	W0117 00:00:16.877526   60269 addons.go:243] addon metrics-server should already be in state true
	I0117 00:00:16.877487   60269 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877580   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0117 00:00:16.877586   60269 addons.go:243] addon storage-provisioner should already be in state true
	I0117 00:00:16.877598   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877641   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.877996   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.878023   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878044   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878110   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.894446   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0117 00:00:16.894710   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0117 00:00:16.894884   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895198   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895375   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895395   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895731   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895757   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895804   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896075   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896401   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.896436   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.896491   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0117 00:00:16.896763   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.897458   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.898007   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.898028   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.898517   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.899079   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.899106   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.900589   60269 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-967325"
	W0117 00:00:16.900606   60269 addons.go:243] addon default-storageclass should already be in state true
	I0117 00:00:16.900632   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.900945   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.900974   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.917329   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0117 00:00:16.918223   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0117 00:00:16.918283   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918593   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918787   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.918806   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919109   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.919135   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919173   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919426   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.919500   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.921674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.923470   60269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0117 00:00:16.922093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.924865   60269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:16.924882   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0117 00:00:16.924900   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.926158   60269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0117 00:00:16.927440   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0117 00:00:16.927461   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0117 00:00:16.927490   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.928105   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.928694   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.929107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.929289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.929432   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.930149   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0117 00:00:16.930552   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.931255   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.931275   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.931335   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931584   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.931606   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931762   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.931908   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.932042   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.932086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.932178   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.933382   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.933419   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.949543   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0117 00:00:16.950092   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.950585   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.950611   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.950912   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.951212   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.952912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.953207   60269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:16.953221   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0117 00:00:16.953242   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.955778   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956104   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.956144   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956381   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.956659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.956808   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.956958   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:17.129430   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0117 00:00:17.167358   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:17.198527   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0117 00:00:17.198553   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0117 00:00:17.313705   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0117 00:00:17.313743   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0117 00:00:17.318720   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:17.387945   60269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-967325" context rescaled to 1 replicas
	I0117 00:00:17.387984   60269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:00:17.391319   60269 out.go:177] * Verifying Kubernetes components...
	I0117 00:00:17.392893   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:00:17.493520   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:17.493544   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0117 00:00:17.613989   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:14.710779   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:17.209946   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:18.852085   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.722614342s)
	I0117 00:00:18.852124   60269 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0117 00:00:19.595960   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.277198121s)
	I0117 00:00:19.595983   60269 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.203057581s)
	I0117 00:00:19.596019   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596022   60269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.596033   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596131   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.428744793s)
	I0117 00:00:19.596164   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596175   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596418   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596437   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596448   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596458   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596544   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596572   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596585   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596603   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596675   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596683   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596697   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.598431   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.598485   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.598507   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.614041   60269 node_ready.go:49] node "default-k8s-diff-port-967325" has status "Ready":"True"
	I0117 00:00:19.614070   60269 node_ready.go:38] duration metric: took 18.033715ms waiting for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.614083   60269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:00:19.631026   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.631065   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.631393   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.631412   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.631430   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.643995   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.685268   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.071240033s)
	I0117 00:00:19.685313   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685685   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685706   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685722   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685725   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.685733   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685949   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685973   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685984   60269 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:19.688162   60269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0117 00:00:19.690707   60269 addons.go:505] enable addons completed in 2.813327403s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0117 00:00:20.653786   60269 pod_ready.go:92] pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.653817   60269 pod_ready.go:81] duration metric: took 1.009789354s waiting for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.653827   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.657327   60269 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657355   60269 pod_ready.go:81] duration metric: took 3.520465ms waiting for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	E0117 00:00:20.657367   60269 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657375   60269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664327   60269 pod_ready.go:92] pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.664345   60269 pod_ready.go:81] duration metric: took 6.963883ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664354   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669229   60269 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.669247   60269 pod_ready.go:81] duration metric: took 4.887581ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669255   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675553   60269 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.675577   60269 pod_ready.go:81] duration metric: took 6.316801ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675585   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800600   60269 pod_ready.go:92] pod "kube-proxy-2z6bl" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:21.800632   60269 pod_ready.go:81] duration metric: took 1.125039774s waiting for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800646   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200536   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:22.200559   60269 pod_ready.go:81] duration metric: took 399.905665ms waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200569   60269 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.212369   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:21.709474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:23.710530   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:24.210445   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:26.709024   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:28.709454   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:25.710634   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:27.710692   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:30.709571   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.710848   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:29.710867   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.209611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:35.208419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:37.708871   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:34.209847   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:36.210863   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:38.211047   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.209274   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711560   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.212061   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711598   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.209016   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211322   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.211051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.709459   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.209458   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.711889   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.210405   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.710123   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:57.208591   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.210670   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:56.711102   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:58.711595   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:59.708515   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.710699   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.210587   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:03.210938   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:04.207715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:06.709563   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:05.211825   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:07.709958   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:09.208156   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:11.208879   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:13.708545   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:10.211100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:12.710100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:16.209033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:18.209754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:14.710821   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:17.212258   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:20.708444   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.712038   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:19.711436   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.210580   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.714772   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:27.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.213488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:26.711404   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.710945   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:32.208179   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.211008   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:31.212442   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:33.711966   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:34.208936   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.209612   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.708413   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.211118   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.214093   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:41.208750   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:43.208812   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:40.710199   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:42.710497   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.708094   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:48.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.210899   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:47.214352   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:50.708669   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:52.709880   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:49.709767   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:51.710715   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:53.714522   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:55.209030   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:57.709205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:56.212226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:58.715976   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:00.209358   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:02.710521   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:01.210842   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:03.710418   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.208742   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:07.210121   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.711354   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:08.211933   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:09.210830   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:11.708402   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:13.710205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:10.212433   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:12.715928   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:16.207633   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:18.208824   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:15.214546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:17.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.209380   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.708970   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.212349   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.711167   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.208762   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.708487   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.212601   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:30.209319   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.708822   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:29.711046   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:35.207798   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.217291   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:34.710869   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.210140   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.707745   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711335   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.708871   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711327   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.207582   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.207988   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:48.709297   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.211602   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.714689   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.208519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.208808   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:49.212952   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.214415   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.710355   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.209145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:57.210556   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.716301   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:58.211226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:59.709541   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.208573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:00.709819   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.712699   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.208754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:06.708448   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:08.709286   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.713780   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:07.213872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:10.709570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:13.208062   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:09.714259   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:12.211448   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:15.209488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:17.709522   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:14.710693   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:16.711192   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:20.207874   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:22.211189   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:19.210191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:21.210773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:23.213975   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:24.708835   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:26.708889   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:25.710691   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:27.711139   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:29.209704   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:31.209811   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:33.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:30.210569   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:32.211539   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:35.708998   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:38.208295   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:34.711729   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:37.210492   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:40.707726   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:42.709246   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:39.211926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:41.711599   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:43.711794   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:44.710010   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:47.208407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:46.211285   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:48.212279   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:49.208869   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:51.210676   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:53.708315   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:50.212776   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:52.710665   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:55.709867   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:58.210415   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:54.711312   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:57.210611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:00.708385   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:03.208916   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210900   60073 pod_ready.go:81] duration metric: took 4m0.008455197s waiting for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	E0117 00:03:59.210913   60073 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:03:59.210923   60073 pod_ready.go:38] duration metric: took 4m1.999568751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:03:59.210941   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:03:59.210977   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:03:59.211045   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:03:59.268921   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.268947   60073 cri.go:89] found id: ""
	I0117 00:03:59.268956   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:03:59.269005   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.273505   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:03:59.273575   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:03:59.316812   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:03:59.316838   60073 cri.go:89] found id: ""
	I0117 00:03:59.316847   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:03:59.316902   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.321703   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:03:59.321778   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:03:59.365900   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:03:59.365920   60073 cri.go:89] found id: ""
	I0117 00:03:59.365927   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:03:59.365979   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.371077   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:03:59.371148   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:03:59.410379   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:03:59.410405   60073 cri.go:89] found id: ""
	I0117 00:03:59.410415   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:03:59.410475   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.414679   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:03:59.414752   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:03:59.452102   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.452137   60073 cri.go:89] found id: ""
	I0117 00:03:59.452146   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:03:59.452208   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.456735   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:03:59.456805   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:03:59.497070   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:03:59.497097   60073 cri.go:89] found id: ""
	I0117 00:03:59.497105   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:03:59.497172   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.501388   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:03:59.501464   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:03:59.542895   60073 cri.go:89] found id: ""
	I0117 00:03:59.542921   60073 logs.go:284] 0 containers: []
	W0117 00:03:59.542929   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:03:59.542935   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:03:59.542986   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:03:59.579487   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:03:59.579510   60073 cri.go:89] found id: ""
	I0117 00:03:59.579529   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:03:59.579583   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.583247   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:03:59.583272   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:03:59.682098   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:03:59.682136   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:03:59.811527   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:03:59.811555   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.858592   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:03:59.858623   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.896044   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:03:59.896077   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:00.305516   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:00.305553   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:00.346703   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:00.346734   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:00.360638   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:00.360671   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:00.405575   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:00.405607   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:00.443294   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:00.443325   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:00.489541   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:00.489572   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:00.547805   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:00.547835   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.085588   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:03.102500   60073 api_server.go:72] duration metric: took 4m7.940532649s to wait for apiserver process to appear ...
	I0117 00:04:03.102525   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:03.102560   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:03.102604   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:03.154743   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.154765   60073 cri.go:89] found id: ""
	I0117 00:04:03.154775   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:03.154837   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.158905   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:03.158964   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:03.199001   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.199026   60073 cri.go:89] found id: ""
	I0117 00:04:03.199035   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:03.199090   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.203757   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:03.203821   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:03.243821   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:03.243853   60073 cri.go:89] found id: ""
	I0117 00:04:03.243862   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:03.243926   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.248835   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:03.248938   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:03.287785   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.287807   60073 cri.go:89] found id: ""
	I0117 00:04:03.287817   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:03.287879   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.291737   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:03.291795   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:03.329647   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.329671   60073 cri.go:89] found id: ""
	I0117 00:04:03.329680   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:03.329740   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.337418   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:03.337513   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:03.375391   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:03.375412   60073 cri.go:89] found id: ""
	I0117 00:04:03.375419   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:03.375468   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.379630   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:03.379697   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:03.418311   60073 cri.go:89] found id: ""
	I0117 00:04:03.418353   60073 logs.go:284] 0 containers: []
	W0117 00:04:03.418366   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:03.418374   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:03.418425   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:03.464391   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.464414   60073 cri.go:89] found id: ""
	I0117 00:04:03.464421   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:03.464465   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.469427   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:03.469463   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:03.568016   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:03.568061   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:03.581553   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:03.581578   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.628971   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:03.629007   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.679732   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:03.679768   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.728836   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:03.728875   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.771849   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:03.771879   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:03.902777   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:03.902816   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.952219   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:03.952255   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:04.003190   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:04.003247   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:05.708428   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:07.708492   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:04.067058   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:04.067090   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:04.446812   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:04.446869   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:07.005449   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0117 00:04:07.011401   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0117 00:04:07.012696   60073 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:07.012723   60073 api_server.go:131] duration metric: took 3.910192448s to wait for apiserver health ...
	I0117 00:04:07.012732   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:07.012758   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:07.012804   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:07.052667   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:07.052699   60073 cri.go:89] found id: ""
	I0117 00:04:07.052708   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:07.052769   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.057415   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:07.057482   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:07.096347   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.096374   60073 cri.go:89] found id: ""
	I0117 00:04:07.096383   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:07.096445   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.100499   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:07.100598   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:07.145539   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:07.145561   60073 cri.go:89] found id: ""
	I0117 00:04:07.145567   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:07.145625   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.149880   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:07.149936   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:07.188723   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:07.188751   60073 cri.go:89] found id: ""
	I0117 00:04:07.188760   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:07.188822   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.193191   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:07.193259   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:07.236787   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.236811   60073 cri.go:89] found id: ""
	I0117 00:04:07.236820   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:07.236876   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.241167   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:07.241219   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:07.279432   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.279453   60073 cri.go:89] found id: ""
	I0117 00:04:07.279462   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:07.279527   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.283548   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:07.283618   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:07.319879   60073 cri.go:89] found id: ""
	I0117 00:04:07.319912   60073 logs.go:284] 0 containers: []
	W0117 00:04:07.319922   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:07.319930   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:07.319992   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:07.356138   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.356162   60073 cri.go:89] found id: ""
	I0117 00:04:07.356170   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:07.356219   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.360310   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:07.360339   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:07.457151   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:07.457197   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.501163   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:07.501207   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.544248   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:07.544279   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.593284   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:07.593321   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.635978   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:07.636016   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:07.950451   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:07.950489   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:08.003046   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:08.003089   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:08.017299   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:08.017341   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:08.152348   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:08.152401   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:08.213047   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:08.213084   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:08.249860   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:08.249897   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:10.813629   60073 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:10.813656   60073 system_pods.go:61] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.813670   60073 system_pods.go:61] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.813676   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.813681   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.813685   60073 system_pods.go:61] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.813689   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.813695   60073 system_pods.go:61] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.813699   60073 system_pods.go:61] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.813707   60073 system_pods.go:74] duration metric: took 3.800969531s to wait for pod list to return data ...
	I0117 00:04:10.813714   60073 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:10.816640   60073 default_sa.go:45] found service account: "default"
	I0117 00:04:10.816662   60073 default_sa.go:55] duration metric: took 2.941561ms for default service account to be created ...
	I0117 00:04:10.816669   60073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:10.823246   60073 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:10.823270   60073 system_pods.go:89] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.823274   60073 system_pods.go:89] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.823279   60073 system_pods.go:89] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.823283   60073 system_pods.go:89] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.823287   60073 system_pods.go:89] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.823291   60073 system_pods.go:89] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.823297   60073 system_pods.go:89] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.823302   60073 system_pods.go:89] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.823309   60073 system_pods.go:126] duration metric: took 6.635452ms to wait for k8s-apps to be running ...
	I0117 00:04:10.823316   60073 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:10.823358   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:10.840725   60073 system_svc.go:56] duration metric: took 17.401272ms WaitForService to wait for kubelet.
	I0117 00:04:10.840756   60073 kubeadm.go:581] duration metric: took 4m15.678792469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:10.840782   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:10.843904   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:10.843926   60073 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:10.843938   60073 node_conditions.go:105] duration metric: took 3.150197ms to run NodePressure ...
	I0117 00:04:10.843949   60073 start.go:228] waiting for startup goroutines ...
	I0117 00:04:10.843954   60073 start.go:233] waiting for cluster config update ...
	I0117 00:04:10.843963   60073 start.go:242] writing updated cluster config ...
	I0117 00:04:10.844214   60073 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:10.894554   60073 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:10.896971   60073 out.go:177] * Done! kubectl is now configured to use "embed-certs-837871" cluster and "default" namespace by default
	I0117 00:04:10.209252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:12.707441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:14.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:17.208289   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:19.708419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:21.708960   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:22.208465   60269 pod_ready.go:81] duration metric: took 4m0.007885269s waiting for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	E0117 00:04:22.208486   60269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:04:22.208494   60269 pod_ready.go:38] duration metric: took 4m2.594399816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:04:22.208508   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:04:22.208558   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:22.208608   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:22.258977   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.259005   60269 cri.go:89] found id: ""
	I0117 00:04:22.259013   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:22.259116   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.264067   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:22.264126   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:22.302361   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:22.302396   60269 cri.go:89] found id: ""
	I0117 00:04:22.302407   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:22.302471   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.306898   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:22.306956   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:22.347083   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.347110   60269 cri.go:89] found id: ""
	I0117 00:04:22.347119   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:22.347177   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.352368   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:22.352441   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:22.392093   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:22.392121   60269 cri.go:89] found id: ""
	I0117 00:04:22.392131   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:22.392264   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.397726   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:22.397791   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:22.434242   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:22.434265   60269 cri.go:89] found id: ""
	I0117 00:04:22.434275   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:22.434342   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.438904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:22.438969   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:22.474797   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.474818   60269 cri.go:89] found id: ""
	I0117 00:04:22.474828   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:22.474874   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.478956   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:22.479020   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:22.517049   60269 cri.go:89] found id: ""
	I0117 00:04:22.517078   60269 logs.go:284] 0 containers: []
	W0117 00:04:22.517089   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:22.517096   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:22.517160   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:22.566393   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:22.566419   60269 cri.go:89] found id: ""
	I0117 00:04:22.566428   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:22.566486   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.572179   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:22.572206   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.624440   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:22.624471   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.666603   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:22.666629   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.734797   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:22.734829   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:22.827906   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:22.827941   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:22.842239   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:22.842269   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:22.990196   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:22.990226   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:23.048894   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:23.048933   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:23.093309   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:23.093340   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:23.135374   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:23.135400   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:23.172339   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:23.172366   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:23.567228   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:23.567266   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:26.111237   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:26.127331   60269 api_server.go:72] duration metric: took 4m8.739316517s to wait for apiserver process to appear ...
	I0117 00:04:26.127358   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:26.127403   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:26.127465   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:26.164726   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:26.164752   60269 cri.go:89] found id: ""
	I0117 00:04:26.164763   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:26.164824   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.168448   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:26.168500   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:26.205643   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:26.205673   60269 cri.go:89] found id: ""
	I0117 00:04:26.205682   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:26.205742   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.209923   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:26.209982   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:26.247432   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:26.247456   60269 cri.go:89] found id: ""
	I0117 00:04:26.247463   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:26.247514   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.251904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:26.252009   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:26.292943   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.292971   60269 cri.go:89] found id: ""
	I0117 00:04:26.292980   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:26.293038   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.298224   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:26.298307   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:26.338299   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:26.338322   60269 cri.go:89] found id: ""
	I0117 00:04:26.338331   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:26.338398   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.342452   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:26.342520   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:26.384665   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.384693   60269 cri.go:89] found id: ""
	I0117 00:04:26.384702   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:26.384761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.389556   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:26.389629   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:26.427717   60269 cri.go:89] found id: ""
	I0117 00:04:26.427748   60269 logs.go:284] 0 containers: []
	W0117 00:04:26.427758   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:26.427766   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:26.427825   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:26.467435   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.467463   60269 cri.go:89] found id: ""
	I0117 00:04:26.467471   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:26.467529   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.471617   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:26.471641   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.514185   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:26.514216   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.569408   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:26.569440   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.610011   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:26.610040   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:26.976249   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:26.976286   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:27.019812   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:27.019855   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:27.064258   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:27.064285   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:27.104147   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:27.104181   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:27.157665   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:27.157695   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:27.255786   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:27.255824   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:27.269460   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:27.269497   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:27.420255   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:27.420288   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.008636   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0117 00:04:30.014467   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0117 00:04:30.015693   60269 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:30.015716   60269 api_server.go:131] duration metric: took 3.888351113s to wait for apiserver health ...
	I0117 00:04:30.015724   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:30.015745   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:30.015789   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:30.055587   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.055608   60269 cri.go:89] found id: ""
	I0117 00:04:30.055626   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:30.055677   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.060043   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:30.060108   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:30.102912   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:30.102938   60269 cri.go:89] found id: ""
	I0117 00:04:30.102946   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:30.102995   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.107429   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:30.107490   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:30.149238   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.149259   60269 cri.go:89] found id: ""
	I0117 00:04:30.149266   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:30.149318   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.154207   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:30.154276   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:30.195972   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.195998   60269 cri.go:89] found id: ""
	I0117 00:04:30.196008   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:30.196067   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.200515   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:30.200593   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:30.242656   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.242686   60269 cri.go:89] found id: ""
	I0117 00:04:30.242696   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:30.242761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.247430   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:30.247488   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:30.285008   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.285036   60269 cri.go:89] found id: ""
	I0117 00:04:30.285045   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:30.285123   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.292254   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:30.292325   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:30.329856   60269 cri.go:89] found id: ""
	I0117 00:04:30.329884   60269 logs.go:284] 0 containers: []
	W0117 00:04:30.329895   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:30.329902   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:30.329962   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:30.370003   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.370026   60269 cri.go:89] found id: ""
	I0117 00:04:30.370033   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:30.370081   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.374869   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:30.374896   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:30.388524   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:30.388564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:30.520901   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:30.520935   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.568977   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:30.569016   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.604580   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:30.604620   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.642634   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:30.642668   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.692005   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:30.692048   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:30.745471   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:30.745532   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:30.842886   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:30.842926   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.891850   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:30.891882   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.929266   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:30.929295   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:31.236511   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:31.236564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:33.783706   60269 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:33.783732   60269 system_pods.go:61] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.783737   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.783742   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.783746   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.783750   60269 system_pods.go:61] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.783754   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.783760   60269 system_pods.go:61] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.783764   60269 system_pods.go:61] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.783772   60269 system_pods.go:74] duration metric: took 3.768043559s to wait for pod list to return data ...
	I0117 00:04:33.783780   60269 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:33.786490   60269 default_sa.go:45] found service account: "default"
	I0117 00:04:33.786515   60269 default_sa.go:55] duration metric: took 2.725972ms for default service account to be created ...
	I0117 00:04:33.786525   60269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:33.793345   60269 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:33.793372   60269 system_pods.go:89] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.793377   60269 system_pods.go:89] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.793382   60269 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.793388   60269 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.793392   60269 system_pods.go:89] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.793396   60269 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.793404   60269 system_pods.go:89] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.793410   60269 system_pods.go:89] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.793417   60269 system_pods.go:126] duration metric: took 6.886472ms to wait for k8s-apps to be running ...
	I0117 00:04:33.793427   60269 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:33.793470   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:33.809147   60269 system_svc.go:56] duration metric: took 15.709692ms WaitForService to wait for kubelet.
	I0117 00:04:33.809197   60269 kubeadm.go:581] duration metric: took 4m16.421187944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:33.809225   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:33.813251   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:33.813289   60269 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:33.813315   60269 node_conditions.go:105] duration metric: took 4.084961ms to run NodePressure ...
	I0117 00:04:33.813339   60269 start.go:228] waiting for startup goroutines ...
	I0117 00:04:33.813349   60269 start.go:233] waiting for cluster config update ...
	I0117 00:04:33.813362   60269 start.go:242] writing updated cluster config ...
	I0117 00:04:33.813716   60269 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:33.866136   60269 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:33.868353   60269 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-967325" cluster and "default" namespace by default
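	The start sequence above finishes once https://192.168.61.144:8444/healthz returns 200 ("ok") and the kube-system pods are listed as Running. As a rough, stand-alone sketch only (not part of the test harness), the same healthz probe can be approximated with a short Go program; the address is taken from the log lines above, while the TLS-skip setting is an assumption made purely to keep the sketch self-contained (the real check authenticates with the cluster CA and client certificates from the kubeconfig).

	// healthz_probe.go (hypothetical helper, illustration only)
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip certificate verification for illustration only;
				// the real minikube check trusts the cluster's CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Address copied from the log above (default-k8s-diff-port apiserver).
		resp, err := client.Get("https://192.168.61.144:8444/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", matching the log.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}

	Run with `go run healthz_probe.go` while the VM is still up; a healthy control plane prints `200: ok`, mirroring the api_server.go lines in the log above.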
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:14:11 UTC. --
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.808057736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450451808042384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fe5d6bff-e122-44b5-bc83-0642a585726d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.808482968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cbc15d71-fed8-47f7-bc79-a2fd004d2383 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.808527530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cbc15d71-fed8-47f7-bc79-a2fd004d2383 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.808716681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cbc15d71-fed8-47f7-bc79-a2fd004d2383 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.844162566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=53bb8d45-1815-4342-9999-462251986137 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.844245735Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=53bb8d45-1815-4342-9999-462251986137 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.845694682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c4d5c959-f3ff-4ce9-9a85-953b8b64c284 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.846206187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450451846190454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c4d5c959-f3ff-4ce9-9a85-953b8b64c284 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.846822426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aec25331-3726-487d-b163-b2ce0acd3ff9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.846867186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aec25331-3726-487d-b163-b2ce0acd3ff9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.847140084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aec25331-3726-487d-b163-b2ce0acd3ff9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.881588638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=62b4c0cb-2802-4865-94ad-d24c3f0eebbb name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.881646936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=62b4c0cb-2802-4865-94ad-d24c3f0eebbb name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.882776868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f7964b99-7756-4004-8242-e92c3e59cd72 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.883205685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450451883188025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f7964b99-7756-4004-8242-e92c3e59cd72 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.883830688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8c3ea10c-4d41-42c8-9ed0-e2face728560 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.883880555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8c3ea10c-4d41-42c8-9ed0-e2face728560 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.884125544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8c3ea10c-4d41-42c8-9ed0-e2face728560 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.922775496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=69ef3d3d-8a4c-44ad-b334-05f7ab91ebe8 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.922844998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=69ef3d3d-8a4c-44ad-b334-05f7ab91ebe8 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.923727174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5df87a95-1dbe-4de5-9e55-77e09e59d90d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.924198631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450451924182799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5df87a95-1dbe-4de5-9e55-77e09e59d90d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.924691707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a840abc6-c127-48ee-b179-57d6e08e5313 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.924735823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a840abc6-c127-48ee-b179-57d6e08e5313 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:11 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:11.924905723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a840abc6-c127-48ee-b179-57d6e08e5313 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9459eba4162be       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   0                   69a4cbb576850       busybox
	21a6dceb568ad       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      18 minutes ago      Running             coredns                   0                   861a780833a2d       coredns-5644d7b6d9-9njqp
	5cbd938949134       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       0                   51a17462d718a       storage-provisioner
	a613a4e4ddfe3       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      18 minutes ago      Running             kube-proxy                0                   9e58ca8a29986       kube-proxy-9ghls
	7a937abd3b903       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   453bb94b5ee72       etcd-old-k8s-version-771669
	f4999acc2d6d7       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   5f2e4e8fdc564       kube-apiserver-old-k8s-version-771669
	911f813160b15       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   e3d35b7aba356       kube-controller-manager-old-k8s-version-771669
	494f74041efd3       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   13d26353ba2d4       kube-scheduler-old-k8s-version-771669
	
	
	==> coredns [21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942] <==
	E0116 23:46:10.187359       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.193152       1 trace.go:82] Trace[785493325]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.186709268 +0000 UTC m=+0.081907198) (total time: 30.006404152s):
	Trace[785493325]: [30.006404152s] [30.006404152s] END
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.200490       1 trace.go:82] Trace[1301817211]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.19394028 +0000 UTC m=+0.089138224) (total time: 30.006532947s):
	Trace[1301817211]: [30.006532947s] [30.006532947s] END
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	2024-01-16T23:46:15.289Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2024-01-16T23:46:15.321Z [INFO] 127.0.0.1:57441 - 44193 "HINFO IN 1365412375578555759.7322076794870044211. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008071628s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-16T23:55:55.993Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-16T23:55:55.993Z [INFO] CoreDNS-1.6.2
	2024-01-16T23:55:55.993Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T23:55:56.003Z [INFO] 127.0.0.1:59166 - 17216 "HINFO IN 9081841845838306910.8543492278547947642. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009686681s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-771669
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-771669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=old-k8s-version-771669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_45_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:45:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.114
	  Hostname:    old-k8s-version-771669
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0599c334d1574c44852cd606008f4484
	 System UUID:                0599c334-d157-4c44-852c-d606008f4484
	 Boot ID:                    6a822f71-f4d9-4098-87a2-3d00d7bd6120
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-9njqp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-771669                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-apiserver-old-k8s-version-771669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-771669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-proxy-9ghls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-771669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                metrics-server-74d5856cc6-gj4zn                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-771669     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x7 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-771669     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074468] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.864255] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569582] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135010] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.485542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.831981] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.125426] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.166674] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.156891] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.236650] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +18.743957] systemd-fstab-generator[1024]: Ignoring "noauto" for root device
	[  +0.411438] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan16 23:56] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174] <==
	2024-01-16 23:55:46.463616 I | etcdserver: restarting member d80e54998a205cf3 in cluster fe5d4cbbe2066f7 at commit index 527
	2024-01-16 23:55:46.463912 I | raft: d80e54998a205cf3 became follower at term 2
	2024-01-16 23:55:46.463954 I | raft: newRaft d80e54998a205cf3 [peers: [], term: 2, commit: 527, applied: 0, lastindex: 527, lastterm: 2]
	2024-01-16 23:55:46.471794 W | auth: simple token is not cryptographically signed
	2024-01-16 23:55:46.474478 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 23:55:46.476050 I | etcdserver/membership: added member d80e54998a205cf3 [https://192.168.72.114:2380] to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:46.476228 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 23:55:46.476294 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 23:55:46.476369 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 23:55:46.476491 I | embed: listening for metrics on http://192.168.72.114:2381
	2024-01-16 23:55:46.477296 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 23:55:48.264496 I | raft: d80e54998a205cf3 is starting a new election at term 2
	2024-01-16 23:55:48.264548 I | raft: d80e54998a205cf3 became candidate at term 3
	2024-01-16 23:55:48.264567 I | raft: d80e54998a205cf3 received MsgVoteResp from d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.264578 I | raft: d80e54998a205cf3 became leader at term 3
	2024-01-16 23:55:48.264584 I | raft: raft.node: d80e54998a205cf3 elected leader d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.266381 I | etcdserver: published {Name:old-k8s-version-771669 ClientURLs:[https://192.168.72.114:2379]} to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:48.266872 I | embed: ready to serve client requests
	2024-01-16 23:55:48.267138 I | embed: ready to serve client requests
	2024-01-16 23:55:48.268857 I | embed: serving client requests on 192.168.72.114:2379
	2024-01-16 23:55:48.272176 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-17 00:05:48.299555 I | mvcc: store.index: compact 831
	2024-01-17 00:05:48.301444 I | mvcc: finished scheduled compaction at 831 (took 1.48289ms)
	2024-01-17 00:10:48.307018 I | mvcc: store.index: compact 1049
	2024-01-17 00:10:48.309423 I | mvcc: finished scheduled compaction at 1049 (took 1.556943ms)
	
	
	==> kernel <==
	 00:14:12 up 19 min,  0 users,  load average: 0.18, 0.15, 0.10
	Linux old-k8s-version-771669 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877] <==
	I0117 00:06:52.568125       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:06:52.568324       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:06:52.568428       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:06:52.568460       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:08:52.568751       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:08:52.569130       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:08:52.569216       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:08:52.569239       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:10:52.570364       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:10:52.570659       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:10:52.570748       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:10:52.570771       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:11:52.571161       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:11:52.571452       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:11:52.571520       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:11:52.571559       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:13:52.571887       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:13:52.572257       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:13:52.572431       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:13:52.572517       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f] <==
	E0117 00:07:44.159270       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:07:54.492892       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:08:14.411306       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:08:26.495091       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:08:44.663350       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:08:58.497544       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:09:14.915110       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:09:30.499632       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:09:45.167228       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:10:02.502151       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:10:15.419628       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:10:34.504463       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:10:45.671634       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:11:06.506665       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:11:15.924066       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:11:38.508658       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:11:46.176241       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:12:10.510374       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:12:16.428278       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:12:42.512853       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:12:46.680268       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:13:14.515039       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:13:16.932502       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:13:46.516781       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:13:47.184739       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7] <==
	W0116 23:45:41.007361       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:45:41.016329       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:45:41.016352       1 server_others.go:149] Using iptables Proxier.
	I0116 23:45:41.016667       1 server.go:529] Version: v1.16.0
	I0116 23:45:41.018410       1 config.go:131] Starting endpoints config controller
	I0116 23:45:41.024018       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:45:41.018730       1 config.go:313] Starting service config controller
	I0116 23:45:41.024397       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:45:41.124802       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:45:41.125007       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0116 23:55:53.969591       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:55:53.981521       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:55:53.981589       1 server_others.go:149] Using iptables Proxier.
	I0116 23:55:53.982391       1 server.go:529] Version: v1.16.0
	I0116 23:55:53.983881       1 config.go:313] Starting service config controller
	I0116 23:55:53.983929       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:55:53.984039       1 config.go:131] Starting endpoints config controller
	I0116 23:55:53.984056       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:55:54.084183       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:55:54.084427       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d] <==
	E0116 23:45:19.290133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:45:19.293479       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 23:45:19.294843       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 23:45:19.296276       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 23:45:19.297284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 23:45:19.302219       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 23:45:19.306970       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:45:19.307150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.307930       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.308102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0116 23:55:45.888159       1 serving.go:319] Generated self-signed cert in-memory
	W0116 23:55:51.429069       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 23:55:51.429295       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:55:51.429326       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 23:55:51.429407       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 23:55:51.479301       1 server.go:143] Version: v1.16.0
	I0116 23:55:51.479424       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0116 23:55:51.496560       1 authorization.go:47] Authorization is disabled
	W0116 23:55:51.496594       1 authentication.go:79] Authentication is disabled
	I0116 23:55:51.496610       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 23:55:51.497402       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 23:55:51.544869       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:55:51.545090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545174       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545242       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:14:12 UTC. --
	Jan 17 00:09:41 old-k8s-version-771669 kubelet[1030]: E0117 00:09:41.444444    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:09:54 old-k8s-version-771669 kubelet[1030]: E0117 00:09:54.444474    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:06 old-k8s-version-771669 kubelet[1030]: E0117 00:10:06.444489    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:19 old-k8s-version-771669 kubelet[1030]: E0117 00:10:19.444124    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:32 old-k8s-version-771669 kubelet[1030]: E0117 00:10:32.444520    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:43 old-k8s-version-771669 kubelet[1030]: E0117 00:10:43.517317    1030 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 17 00:10:45 old-k8s-version-771669 kubelet[1030]: E0117 00:10:45.444419    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:57 old-k8s-version-771669 kubelet[1030]: E0117 00:10:57.446558    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:12 old-k8s-version-771669 kubelet[1030]: E0117 00:11:12.444116    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:23 old-k8s-version-771669 kubelet[1030]: E0117 00:11:23.449289    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:38 old-k8s-version-771669 kubelet[1030]: E0117 00:11:38.444251    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455296    1030 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455380    1030 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455430    1030 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455461    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 17 00:12:03 old-k8s-version-771669 kubelet[1030]: E0117 00:12:03.444940    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:17 old-k8s-version-771669 kubelet[1030]: E0117 00:12:17.445881    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:30 old-k8s-version-771669 kubelet[1030]: E0117 00:12:30.445107    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:42 old-k8s-version-771669 kubelet[1030]: E0117 00:12:42.444153    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:53 old-k8s-version-771669 kubelet[1030]: E0117 00:12:53.445269    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:07 old-k8s-version-771669 kubelet[1030]: E0117 00:13:07.444539    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:21 old-k8s-version-771669 kubelet[1030]: E0117 00:13:21.444749    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:34 old-k8s-version-771669 kubelet[1030]: E0117 00:13:34.444779    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:47 old-k8s-version-771669 kubelet[1030]: E0117 00:13:47.445167    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:14:01 old-k8s-version-771669 kubelet[1030]: E0117 00:14:01.444446    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3] <==
	I0116 23:45:41.784762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:45:41.799195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:45:41.799369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:45:41.808193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:45:41.809025       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:45:41.810922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4 became leader
	I0116 23:45:41.909835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:55:55.015814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:55:55.084172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:55:55.084535       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:56:12.492253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:56:12.492881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	I0116 23:56:12.493615       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0 became leader
	I0116 23:56:12.593934       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-771669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-gj4zn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1 (65.081725ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-gj4zn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-771669 logs -n 25: (1.487215476s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo                                  | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo find                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-097488 sudo crio                             | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-097488                                       | bridge-097488                | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-123117 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:45 UTC |
	|         | disable-driver-mounts-123117                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:45 UTC | 16 Jan 24 23:47 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-771669        | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC | 16 Jan 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-085322             | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-837871            | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC | 16 Jan 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-967325  | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC | 16 Jan 24 23:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:48 UTC |                     |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-771669             | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-771669                              | old-k8s-version-771669       | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-085322                  | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-085322                                   | no-preload-085322            | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC | 16 Jan 24 23:59 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-837871                 | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-837871                                  | embed-certs-837871           | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-967325       | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-967325 | jenkins | v1.32.0 | 16 Jan 24 23:50 UTC | 17 Jan 24 00:04 UTC |
	|         | default-k8s-diff-port-967325                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-771669 image                           | old-k8s-version-771669       | jenkins | v1.32.0 | 17 Jan 24 00:14 UTC | 17 Jan 24 00:14 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 23:50:38.759760   60269 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:50:38.759896   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.759907   60269 out.go:309] Setting ErrFile to fd 2...
	I0116 23:50:38.759914   60269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:50:38.760126   60269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:50:38.760678   60269 out.go:303] Setting JSON to false
	I0116 23:50:38.761641   60269 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5585,"bootTime":1705443454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:50:38.761709   60269 start.go:138] virtualization: kvm guest
	I0116 23:50:38.763997   60269 out.go:177] * [default-k8s-diff-port-967325] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:50:38.765368   60269 notify.go:220] Checking for updates...
	I0116 23:50:38.767255   60269 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:50:38.768689   60269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:50:38.770002   60269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:50:38.771265   60269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:50:38.772478   60269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:50:38.773887   60269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:50:38.775771   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:50:38.776343   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.776406   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.790484   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0116 23:50:38.790881   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.791331   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.791354   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.791767   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.791948   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.792207   60269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:50:38.792478   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:50:38.792512   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:50:38.806373   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0116 23:50:38.806769   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:50:38.807352   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:50:38.807377   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:50:38.807713   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:50:38.807888   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:50:38.844486   60269 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 23:50:38.845772   60269 start.go:298] selected driver: kvm2
	I0116 23:50:38.845786   60269 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.845896   60269 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:50:38.846669   60269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.846746   60269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 23:50:38.861437   60269 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 23:50:38.861794   60269 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 23:50:38.861869   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:50:38.861886   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:50:38.861903   60269 start_flags.go:321] config:
	{Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:50:38.862070   60269 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 23:50:38.864512   60269 out.go:177] * Starting control plane node default-k8s-diff-port-967325 in cluster default-k8s-diff-port-967325
	I0116 23:50:35.694534   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.766489   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:38.865813   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:50:38.865854   60269 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 23:50:38.865868   60269 cache.go:56] Caching tarball of preloaded images
	I0116 23:50:38.865946   60269 preload.go:174] Found /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 23:50:38.865958   60269 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 23:50:38.866067   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:50:38.866254   60269 start.go:365] acquiring machines lock for default-k8s-diff-port-967325: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:50:44.846593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:47.918614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:53.998619   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:50:57.070626   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:03.150612   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:06.222615   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:12.302594   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:15.374637   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:21.454609   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:24.526620   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:30.606636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:33.678599   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:39.758623   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:42.830638   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:48.910588   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:51.982570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:51:58.062585   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:01.134627   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:07.214606   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:10.286692   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:16.366642   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:19.438617   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:25.518614   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:28.590572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:34.670577   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:37.742593   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:43.822547   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:46.894566   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:52.974586   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:52:56.046663   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:02.126625   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:05.198647   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:11.278567   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:14.350629   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:20.430640   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:23.502572   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:29.582639   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:32.654601   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:38.734636   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:41.806621   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:47.886613   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:50.958654   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:53:57.038576   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:00.110570   59622 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I0116 23:54:03.114737   59938 start.go:369] acquired machines lock for "no-preload-085322" in 4m4.444202574s
	I0116 23:54:03.114809   59938 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:03.114817   59938 fix.go:54] fixHost starting: 
	I0116 23:54:03.115151   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:03.115188   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:03.129740   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0116 23:54:03.130141   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:03.130598   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:54:03.130619   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:03.130926   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:03.131095   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:03.131232   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:54:03.132851   59938 fix.go:102] recreateIfNeeded on no-preload-085322: state=Stopped err=<nil>
	I0116 23:54:03.132873   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	W0116 23:54:03.133043   59938 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:03.134884   59938 out.go:177] * Restarting existing kvm2 VM for "no-preload-085322" ...
	I0116 23:54:03.136262   59938 main.go:141] libmachine: (no-preload-085322) Calling .Start
	I0116 23:54:03.136432   59938 main.go:141] libmachine: (no-preload-085322) Ensuring networks are active...
	I0116 23:54:03.137113   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network default is active
	I0116 23:54:03.137528   59938 main.go:141] libmachine: (no-preload-085322) Ensuring network mk-no-preload-085322 is active
	I0116 23:54:03.137880   59938 main.go:141] libmachine: (no-preload-085322) Getting domain xml...
	I0116 23:54:03.138613   59938 main.go:141] libmachine: (no-preload-085322) Creating domain...
	I0116 23:54:03.112375   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:03.112409   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:54:03.114601   59622 machine.go:91] provisioned docker machine in 4m37.41859178s
	I0116 23:54:03.114647   59622 fix.go:56] fixHost completed within 4m37.439054279s
	I0116 23:54:03.114654   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 4m37.439073197s
	W0116 23:54:03.114678   59622 start.go:694] error starting host: provision: host is not running
	W0116 23:54:03.114769   59622 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 23:54:03.114780   59622 start.go:709] Will try again in 5 seconds ...
	I0116 23:54:04.327758   59938 main.go:141] libmachine: (no-preload-085322) Waiting to get IP...
	I0116 23:54:04.328580   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.329077   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.329172   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.329065   60794 retry.go:31] will retry after 242.417074ms: waiting for machine to come up
	I0116 23:54:04.573623   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.574286   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.574314   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.574234   60794 retry.go:31] will retry after 376.338621ms: waiting for machine to come up
	I0116 23:54:04.952081   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:04.952569   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:04.952609   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:04.952512   60794 retry.go:31] will retry after 437.645823ms: waiting for machine to come up
	I0116 23:54:05.392169   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.392672   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.392701   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.392621   60794 retry.go:31] will retry after 422.797207ms: waiting for machine to come up
	I0116 23:54:05.817196   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:05.817610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:05.817639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:05.817571   60794 retry.go:31] will retry after 640.372887ms: waiting for machine to come up
	I0116 23:54:06.459387   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:06.459792   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:06.459822   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:06.459719   60794 retry.go:31] will retry after 683.537292ms: waiting for machine to come up
	I0116 23:54:07.144668   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:07.144994   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:07.145027   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:07.144980   60794 retry.go:31] will retry after 898.931175ms: waiting for machine to come up
	I0116 23:54:08.045022   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:08.045409   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:08.045437   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:08.045355   60794 retry.go:31] will retry after 1.288697598s: waiting for machine to come up
	I0116 23:54:08.117270   59622 start.go:365] acquiring machines lock for old-k8s-version-771669: {Name:mkbb7ac5518f9293e687bfd88167ecc50b976d18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 23:54:09.335202   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:09.335610   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:09.335639   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:09.335546   60794 retry.go:31] will retry after 1.355850443s: waiting for machine to come up
	I0116 23:54:10.693078   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:10.693554   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:10.693606   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:10.693520   60794 retry.go:31] will retry after 1.916329826s: waiting for machine to come up
	I0116 23:54:12.611840   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:12.612332   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:12.612367   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:12.612282   60794 retry.go:31] will retry after 2.556862035s: waiting for machine to come up
	I0116 23:54:15.171589   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:15.172039   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:15.172068   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:15.171972   60794 retry.go:31] will retry after 2.519530929s: waiting for machine to come up
	I0116 23:54:17.694557   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:17.694939   59938 main.go:141] libmachine: (no-preload-085322) DBG | unable to find current IP address of domain no-preload-085322 in network mk-no-preload-085322
	I0116 23:54:17.694968   59938 main.go:141] libmachine: (no-preload-085322) DBG | I0116 23:54:17.694886   60794 retry.go:31] will retry after 3.090458186s: waiting for machine to come up
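The retry.go lines above show the driver waiting for the freshly restarted no-preload-085322 domain to pick up a DHCP lease, sleeping a progressively longer, jittered interval between lookups of the domain's MAC address. A rough sketch of that backoff pattern, with lookupIP as a hypothetical stand-in for the libvirt lease query and arbitrary attempt and interval limits:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain's MAC.
    // It always fails here so the retry behaviour stays visible.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func main() {
    	mac := "52:54:00:57:25:4d"
    	wait := 250 * time.Millisecond
    	for attempt := 1; attempt <= 10; attempt++ {
    		ip, err := lookupIP(mac)
    		if err == nil {
    			fmt.Println("Found IP for machine:", ip)
    			return
    		}
    		// Grow the delay and add jitter, similar to the "will retry after ..." lines.
    		wait = wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    	}
    	fmt.Println("machine did not come up in time")
    }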
	I0116 23:54:21.986927   60073 start.go:369] acquired machines lock for "embed-certs-837871" in 4m12.827160117s
	I0116 23:54:21.986990   60073 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:21.986998   60073 fix.go:54] fixHost starting: 
	I0116 23:54:21.987380   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:21.987421   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:22.004600   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0116 23:54:22.004995   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:22.005467   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:54:22.005496   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:22.005829   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:22.006029   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:22.006185   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:54:22.008077   60073 fix.go:102] recreateIfNeeded on embed-certs-837871: state=Stopped err=<nil>
	I0116 23:54:22.008103   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	W0116 23:54:22.008290   60073 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:22.010638   60073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-837871" ...
	I0116 23:54:20.788433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788853   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has current primary IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.788879   59938 main.go:141] libmachine: (no-preload-085322) Found IP for machine: 192.168.50.183
	I0116 23:54:20.788893   59938 main.go:141] libmachine: (no-preload-085322) Reserving static IP address...
	I0116 23:54:20.789229   59938 main.go:141] libmachine: (no-preload-085322) Reserved static IP address: 192.168.50.183
	I0116 23:54:20.789275   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.789290   59938 main.go:141] libmachine: (no-preload-085322) Waiting for SSH to be available...
	I0116 23:54:20.789318   59938 main.go:141] libmachine: (no-preload-085322) DBG | skip adding static IP to network mk-no-preload-085322 - found existing host DHCP lease matching {name: "no-preload-085322", mac: "52:54:00:57:25:4d", ip: "192.168.50.183"}
	I0116 23:54:20.789337   59938 main.go:141] libmachine: (no-preload-085322) DBG | Getting to WaitForSSH function...
	I0116 23:54:20.791667   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792013   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.792054   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.792155   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH client type: external
	I0116 23:54:20.792182   59938 main.go:141] libmachine: (no-preload-085322) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa (-rw-------)
	I0116 23:54:20.792239   59938 main.go:141] libmachine: (no-preload-085322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:20.792264   59938 main.go:141] libmachine: (no-preload-085322) DBG | About to run SSH command:
	I0116 23:54:20.792282   59938 main.go:141] libmachine: (no-preload-085322) DBG | exit 0
	I0116 23:54:20.878320   59938 main.go:141] libmachine: (no-preload-085322) DBG | SSH cmd err, output: <nil>: 
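In this phase "waiting for SSH" means shelling out to the system ssh binary with a non-interactive option set and running "exit 0" until it succeeds. A simplified sketch of issuing that probe, using a subset of the options shown in the log; the retry count and sleep are assumptions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshExitZero runs "exit 0" on the guest over the external ssh client.
    func sshExitZero(ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes", "-i", keyPath,
    		"-p", "22", "docker@" + ip, "exit 0",
    	}
    	return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
    	ip := "192.168.50.183"
    	key := "/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa"
    	for i := 0; i < 20; i++ { // keep probing until the guest's sshd answers
    		if err := sshExitZero(ip, key); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }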
	I0116 23:54:20.878650   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetConfigRaw
	I0116 23:54:20.879331   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:20.881964   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882374   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.882410   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.882680   59938 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/config.json ...
	I0116 23:54:20.882904   59938 machine.go:88] provisioning docker machine ...
	I0116 23:54:20.882923   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:20.883142   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883335   59938 buildroot.go:166] provisioning hostname "no-preload-085322"
	I0116 23:54:20.883356   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:20.883553   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:20.885549   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.885943   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:20.885978   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:20.886040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:20.886216   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:20.886593   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:20.886774   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:20.887119   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:20.887134   59938 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-085322 && echo "no-preload-085322" | sudo tee /etc/hostname
	I0116 23:54:21.013385   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-085322
	
	I0116 23:54:21.013408   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.016312   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016630   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.016670   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.016859   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.017058   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017252   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.017386   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.017557   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.017929   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.017956   59938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-085322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-085322/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-085322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:21.135238   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:21.135270   59938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:21.135289   59938 buildroot.go:174] setting up certificates
	I0116 23:54:21.135313   59938 provision.go:83] configureAuth start
	I0116 23:54:21.135326   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetMachineName
	I0116 23:54:21.135618   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.138168   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138443   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.138470   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.138654   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.140789   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141120   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.141147   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.141324   59938 provision.go:138] copyHostCerts
	I0116 23:54:21.141367   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:21.141377   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:21.141447   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:21.141550   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:21.141561   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:21.141599   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:21.141671   59938 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:21.141682   59938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:21.141714   59938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:21.141791   59938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.no-preload-085322 san=[192.168.50.183 192.168.50.183 localhost 127.0.0.1 minikube no-preload-085322]
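configureAuth regenerates the machine's server certificate with the IPs and hostnames listed in the san=[...] line above. As a compact illustration, the sketch below creates a SAN-bearing server certificate with Go's crypto/x509; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair referenced in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-085322"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the "san=[...]" list in the log above.
    		DNSNames:    []string{"localhost", "minikube", "no-preload-085322"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.50.183"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed for illustration only; the real provisioner signs with the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }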
	I0116 23:54:21.265735   59938 provision.go:172] copyRemoteCerts
	I0116 23:54:21.265800   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:21.265825   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.268291   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268647   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.268676   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.268842   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.269076   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.269250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.269383   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.351116   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:21.373208   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 23:54:21.395440   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 23:54:21.418028   59938 provision.go:86] duration metric: configureAuth took 282.698913ms
	I0116 23:54:21.418069   59938 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:21.418298   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:54:21.418409   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.421433   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421751   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.421792   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.421959   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.422191   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422369   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.422491   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.422646   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.422977   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.422995   59938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:21.743469   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:21.743502   59938 machine.go:91] provisioned docker machine in 860.58306ms
	I0116 23:54:21.743515   59938 start.go:300] post-start starting for "no-preload-085322" (driver="kvm2")
	I0116 23:54:21.743538   59938 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:21.743558   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.743870   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:21.743898   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.746430   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746786   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.746823   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.746957   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.747146   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.747302   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.747394   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.837160   59938 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:21.841116   59938 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:21.841157   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:21.841249   59938 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:21.841329   59938 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:21.841413   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:21.849407   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:21.872039   59938 start.go:303] post-start completed in 128.504699ms
	I0116 23:54:21.872072   59938 fix.go:56] fixHost completed within 18.75725342s
	I0116 23:54:21.872110   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.874707   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875214   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.875240   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.875487   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.875722   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.875867   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.876032   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.876210   59938 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:21.876556   59938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I0116 23:54:21.876570   59938 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:54:21.986781   59938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449261.939803143
	
	I0116 23:54:21.986801   59938 fix.go:206] guest clock: 1705449261.939803143
	I0116 23:54:21.986809   59938 fix.go:219] Guest: 2024-01-16 23:54:21.939803143 +0000 UTC Remote: 2024-01-16 23:54:21.872075872 +0000 UTC m=+263.353199909 (delta=67.727271ms)
	I0116 23:54:21.986830   59938 fix.go:190] guest clock delta is within tolerance: 67.727271ms
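After provisioning, the start path reads the guest clock over SSH (date +%s.%N), compares it with the host wall clock, and only corrects it when the difference exceeds a tolerance; here the 67.7ms delta is accepted. A small sketch of that comparison, with the tolerance value assumed rather than taken from minikube:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockSkewOK reports whether the guest clock is close enough to the host clock.
    func clockSkewOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values taken from the log above: guest 1705449261.939803143, host about 67ms earlier.
    	guest := time.Unix(1705449261, 939803143)
    	host := guest.Add(-67727271 * time.Nanosecond)
    	delta, ok := clockSkewOK(guest, host, 2*time.Second) // tolerance is an assumed value
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }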
	I0116 23:54:21.986836   59938 start.go:83] releasing machines lock for "no-preload-085322", held for 18.872049435s
	I0116 23:54:21.986866   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.987132   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:21.990038   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990450   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.990479   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.990658   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991145   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991340   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:54:21.991433   59938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:21.991476   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.991598   59938 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:21.991622   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:54:21.994160   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994384   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994588   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994611   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.994696   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.994864   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.994879   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:21.994956   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:21.995040   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:54:21.995116   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995212   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:54:21.995279   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:21.995338   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:54:21.995469   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:54:22.075709   59938 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:22.113571   59938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:22.255250   59938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:22.261120   59938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:22.261199   59938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:22.275644   59938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:22.275667   59938 start.go:475] detecting cgroup driver to use...
	I0116 23:54:22.275740   59938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:22.292314   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:22.303940   59938 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:22.303994   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:22.316146   59938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:22.328261   59938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:22.429568   59938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:22.545391   59938 docker.go:233] disabling docker service ...
	I0116 23:54:22.545478   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:22.558823   59938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:22.571068   59938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:22.680713   59938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:22.784418   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:22.800751   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:22.819671   59938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:22.819738   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.831950   59938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:22.832019   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.842937   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:22.853168   59938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
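The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place so CRI-O uses the expected pause image and the cgroupfs cgroup manager, then pin conmon to the "pod" cgroup. A minimal sketch of how those sed invocations can be composed; the sedSet helper is hypothetical, and only the file path, keys, and values come from the log:

    package main

    import "fmt"

    const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

    // sedSet builds a sed invocation that rewrites a whole `key = value` line in crioConf.
    func sedSet(key, value string) string {
    	return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' %s`, key, key, value, crioConf)
    }

    func main() {
    	fmt.Println(sedSet("pause_image", "registry.k8s.io/pause:3.9"))
    	fmt.Println(sedSet("cgroup_manager", "cgroupfs"))
    	// conmon is then pinned to the "pod" cgroup right after the cgroup_manager line.
    	fmt.Printf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`+"\n", crioConf)
    }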
	I0116 23:54:22.863057   59938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:22.873184   59938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:22.881975   59938 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:22.882051   59938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:22.895888   59938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
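The fallback above is a common pattern: the sysctl check for net.bridge.bridge-nf-call-iptables fails because the br_netfilter module is not loaded, so the module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A hedged sketch of that logic, run locally rather than over SSH; the command names come from the log and the error handling is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and returns its combined output and error.
    func run(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// If the bridge sysctl is missing, the br_netfilter module is not loaded yet.
    	if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
    		if _, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			fmt.Println("modprobe failed:", err)
    		}
    	}
    	// Kubernetes networking also needs IPv4 forwarding turned on.
    	if _, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }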
	I0116 23:54:22.904754   59938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:23.007196   59938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:23.167523   59938 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:23.167604   59938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:23.172603   59938 start.go:543] Will wait 60s for crictl version
	I0116 23:54:23.172661   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.176234   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:23.211267   59938 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:23.211355   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.255175   59938 ssh_runner.go:195] Run: crio --version
	I0116 23:54:23.300404   59938 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 23:54:23.302242   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetIP
	I0116 23:54:23.305445   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.305835   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:54:23.305860   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:54:23.306058   59938 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:23.310150   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:23.321291   59938 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 23:54:23.321348   59938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:23.358829   59938 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 23:54:23.358866   59938 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:54:23.358910   59938 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:23.358974   59938 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.359014   59938 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.359037   59938 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.359019   59938 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 23:54:23.359109   59938 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.359116   59938 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.359192   59938 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360471   59938 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.360486   59938 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.360479   59938 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 23:54:23.360482   59938 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.360503   59938 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.360525   59938 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:22.012196   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Start
	I0116 23:54:22.012405   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring networks are active...
	I0116 23:54:22.013178   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network default is active
	I0116 23:54:22.013529   60073 main.go:141] libmachine: (embed-certs-837871) Ensuring network mk-embed-certs-837871 is active
	I0116 23:54:22.013912   60073 main.go:141] libmachine: (embed-certs-837871) Getting domain xml...
	I0116 23:54:22.014514   60073 main.go:141] libmachine: (embed-certs-837871) Creating domain...
	I0116 23:54:23.261878   60073 main.go:141] libmachine: (embed-certs-837871) Waiting to get IP...
	I0116 23:54:23.263010   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.263550   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.263625   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.263530   60915 retry.go:31] will retry after 307.379701ms: waiting for machine to come up
	I0116 23:54:23.572127   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.572604   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.572640   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.572557   60915 retry.go:31] will retry after 367.767271ms: waiting for machine to come up
	I0116 23:54:23.942420   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:23.942907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:23.942937   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:23.942855   60915 retry.go:31] will retry after 327.227989ms: waiting for machine to come up
	I0116 23:54:23.582933   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.587427   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.591221   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 23:54:23.600943   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.601854   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.620857   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.636430   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.654149   59938 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 23:54:23.654203   59938 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.654256   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.704462   59938 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 23:54:23.704519   59938 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.704571   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851614   59938 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 23:54:23.851646   59938 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 23:54:23.851663   59938 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.851662   59938 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851711   59938 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 23:54:23.851754   59938 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.851767   59938 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 23:54:23.851795   59938 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.851802   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851709   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851832   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 23:54:23.851843   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:54:23.851845   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 23:54:23.868480   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 23:54:23.906566   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 23:54:23.906609   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.906713   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.927452   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 23:54:23.927455   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 23:54:23.927669   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.927767   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:23.959664   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 23:54:23.959782   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:23.990016   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 23:54:23.990042   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990040   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:23.990089   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 23:54:23.990217   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:24.018967   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019064   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 23:54:24.019080   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:24.019089   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019115   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 23:54:24.019135   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 23:54:24.019160   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:24.164580   59938 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.888709   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.898467269s)
	I0116 23:54:26.888747   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 23:54:26.888768   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888777   59938 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.869591717s)
	I0116 23:54:26.888817   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 23:54:26.888824   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 23:54:26.888710   59938 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.869617277s)
	I0116 23:54:26.888879   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 23:54:26.888856   59938 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.724243534s)
	I0116 23:54:26.888931   59938 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 23:54:26.888965   59938 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:26.889006   59938 ssh_runner.go:195] Run: which crictl
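The cache_images sequence above is a per-image check-then-load cycle; as a plain-shell sketch of the same commands (image name and tarball path are copied from the log, the control flow is an approximation of cache_images.go, not its literal code):
	IMG=registry.k8s.io/kube-proxy:v1.29.0-rc.2
	TAR=/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	    sudo /usr/bin/crictl rmi "$IMG" || true   # drop any stale/partial copy from the runtime
	    stat -c "%s %y" "$TAR"                    # tarball already on the node? (host->guest copy is skipped if so)
	    sudo podman load -i "$TAR"                # load the cached tarball into CRI-O's image store
	fi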
	I0116 23:54:24.271311   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.271747   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.271777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.271695   60915 retry.go:31] will retry after 459.459832ms: waiting for machine to come up
	I0116 23:54:24.732506   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:24.733007   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:24.733036   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:24.732957   60915 retry.go:31] will retry after 584.775753ms: waiting for machine to come up
	I0116 23:54:25.319663   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:25.320171   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:25.320215   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:25.320117   60915 retry.go:31] will retry after 942.568443ms: waiting for machine to come up
	I0116 23:54:26.264735   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:26.265207   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:26.265241   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:26.265152   60915 retry.go:31] will retry after 986.504626ms: waiting for machine to come up
	I0116 23:54:27.253751   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:27.254422   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:27.254451   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:27.254363   60915 retry.go:31] will retry after 1.332096797s: waiting for machine to come up
	I0116 23:54:28.588407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:28.589024   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:28.589057   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:28.588967   60915 retry.go:31] will retry after 1.510766858s: waiting for machine to come up
	I0116 23:54:29.054814   59938 ssh_runner.go:235] Completed: which crictl: (2.165780571s)
	I0116 23:54:29.054899   59938 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:54:29.054938   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.166081855s)
	I0116 23:54:29.054973   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 23:54:29.055002   59938 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:29.055058   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 23:54:32.781289   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.726190592s)
	I0116 23:54:32.781378   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 23:54:32.781384   59938 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.72645917s)
	I0116 23:54:32.781421   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781452   59938 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 23:54:32.781499   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 23:54:32.781549   59938 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:32.786061   59938 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 23:54:30.101582   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:30.102035   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:30.102080   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:30.101996   60915 retry.go:31] will retry after 1.681256612s: waiting for machine to come up
	I0116 23:54:31.786133   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:31.786678   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:31.786717   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:31.786625   60915 retry.go:31] will retry after 2.501397759s: waiting for machine to come up
	I0116 23:54:35.155364   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.37383462s)
	I0116 23:54:35.155398   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 23:54:35.155423   59938 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:35.155471   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 23:54:37.035841   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.880336789s)
	I0116 23:54:37.035878   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 23:54:37.035908   59938 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:37.035957   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 23:54:38.382731   59938 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.346744157s)
	I0116 23:54:38.382770   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 23:54:38.382801   59938 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:38.382857   59938 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 23:54:34.289289   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:34.289853   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:34.289876   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:34.289788   60915 retry.go:31] will retry after 2.655614857s: waiting for machine to come up
	I0116 23:54:36.947614   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:36.948090   60073 main.go:141] libmachine: (embed-certs-837871) DBG | unable to find current IP address of domain embed-certs-837871 in network mk-embed-certs-837871
	I0116 23:54:36.948110   60073 main.go:141] libmachine: (embed-certs-837871) DBG | I0116 23:54:36.948022   60915 retry.go:31] will retry after 3.331974558s: waiting for machine to come up
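The retry lines above are the kvm2 driver polling libvirt until the guest's DHCP lease appears; a rough equivalent with the libvirt CLI (assumes virsh is installed on the host, whereas the driver talks to libvirt over its API rather than this command):
	until virsh -c qemu:///system domifaddr embed-certs-837871 | grep -q ipv4; do
	    sleep 1   # the driver uses growing randomized back-off (~0.3s up to ~3.3s), not a fixed 1s
	done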
	I0116 23:54:41.527170   60269 start.go:369] acquired machines lock for "default-k8s-diff-port-967325" in 4m2.660883224s
	I0116 23:54:41.527252   60269 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:54:41.527265   60269 fix.go:54] fixHost starting: 
	I0116 23:54:41.527698   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:54:41.527739   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:54:41.544050   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0116 23:54:41.544467   60269 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:54:41.544979   60269 main.go:141] libmachine: Using API Version  1
	I0116 23:54:41.545009   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:54:41.545297   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:54:41.545474   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:54:41.545619   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0116 23:54:41.547250   60269 fix.go:102] recreateIfNeeded on default-k8s-diff-port-967325: state=Stopped err=<nil>
	I0116 23:54:41.547276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	W0116 23:54:41.547440   60269 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:54:41.550415   60269 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-967325" ...
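fix.go found the machine in state=Stopped, so it restarts the existing domain instead of recreating it; the same state check and restart could be done by hand with virsh (illustrative only, not what minikube itself runs):
	virsh -c qemu:///system list --all                       # default-k8s-diff-port-967325 should show "shut off"
	virsh -c qemu:///system start default-k8s-diff-port-967325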
	I0116 23:54:40.284163   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.284689   60073 main.go:141] libmachine: (embed-certs-837871) Found IP for machine: 192.168.39.226
	I0116 23:54:40.284718   60073 main.go:141] libmachine: (embed-certs-837871) Reserving static IP address...
	I0116 23:54:40.284734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has current primary IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.285176   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.285209   60073 main.go:141] libmachine: (embed-certs-837871) DBG | skip adding static IP to network mk-embed-certs-837871 - found existing host DHCP lease matching {name: "embed-certs-837871", mac: "52:54:00:e9:2a:3c", ip: "192.168.39.226"}
	I0116 23:54:40.285223   60073 main.go:141] libmachine: (embed-certs-837871) Reserved static IP address: 192.168.39.226
	I0116 23:54:40.285240   60073 main.go:141] libmachine: (embed-certs-837871) Waiting for SSH to be available...
	I0116 23:54:40.285254   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Getting to WaitForSSH function...
	I0116 23:54:40.287766   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288257   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.288283   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.288417   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH client type: external
	I0116 23:54:40.288441   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa (-rw-------)
	I0116 23:54:40.288466   60073 main.go:141] libmachine: (embed-certs-837871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:54:40.288473   60073 main.go:141] libmachine: (embed-certs-837871) DBG | About to run SSH command:
	I0116 23:54:40.288481   60073 main.go:141] libmachine: (embed-certs-837871) DBG | exit 0
	I0116 23:54:40.374194   60073 main.go:141] libmachine: (embed-certs-837871) DBG | SSH cmd err, output: <nil>: 
	I0116 23:54:40.374646   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetConfigRaw
	I0116 23:54:40.375380   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.378323   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.378843   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.378877   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.379145   60073 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/config.json ...
	I0116 23:54:40.379332   60073 machine.go:88] provisioning docker machine ...
	I0116 23:54:40.379351   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:40.379538   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379712   60073 buildroot.go:166] provisioning hostname "embed-certs-837871"
	I0116 23:54:40.379731   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.379882   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.382022   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382386   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.382408   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.382542   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.382695   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.382833   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.383019   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.383201   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.383686   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.383707   60073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-837871 && echo "embed-certs-837871" | sudo tee /etc/hostname
	I0116 23:54:40.506034   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-837871
	
	I0116 23:54:40.506064   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.508789   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509236   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.509266   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.509427   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.509624   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509782   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.509909   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.510109   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:40.510593   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:40.510620   60073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-837871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-837871/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-837871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:54:40.626272   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:54:40.626298   60073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:54:40.626356   60073 buildroot.go:174] setting up certificates
	I0116 23:54:40.626372   60073 provision.go:83] configureAuth start
	I0116 23:54:40.626383   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetMachineName
	I0116 23:54:40.626705   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:40.629226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629577   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.629605   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.629737   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.631784   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632093   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.632114   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.632249   60073 provision.go:138] copyHostCerts
	I0116 23:54:40.632306   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:54:40.632318   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:54:40.632389   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:54:40.632489   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:54:40.632499   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:54:40.632529   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:54:40.632607   60073 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:54:40.632617   60073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:54:40.632645   60073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:54:40.632705   60073 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.embed-certs-837871 san=[192.168.39.226 192.168.39.226 localhost 127.0.0.1 minikube embed-certs-837871]
	I0116 23:54:40.842680   60073 provision.go:172] copyRemoteCerts
	I0116 23:54:40.842749   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:54:40.842778   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:40.845198   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845585   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:40.845626   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:40.845798   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:40.845987   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:40.846158   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:40.846313   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:40.931372   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:54:40.955528   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:54:40.979724   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 23:54:41.000711   60073 provision.go:86] duration metric: configureAuth took 374.325381ms
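provision.go generates the server certificate in Go with the SANs requested at provision.go:112 above; one way to confirm the result from the host (openssl invocation is illustrative, the path is the ServerCertPath shown in the auth options):
	openssl x509 -in /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem -noout -text \
	    | grep -A1 'Subject Alternative Name'
	# expect the requested SANs: 192.168.39.226, localhost, 127.0.0.1, minikube, embed-certs-837871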
	I0116 23:54:41.000743   60073 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:54:41.000988   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:54:41.001078   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.003907   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004226   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.004256   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.004472   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.004703   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.004886   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.005025   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.005172   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.005489   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.005505   60073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:54:41.294820   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:54:41.294846   60073 machine.go:91] provisioned docker machine in 915.500911ms
	I0116 23:54:41.294860   60073 start.go:300] post-start starting for "embed-certs-837871" (driver="kvm2")
	I0116 23:54:41.294873   60073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:54:41.294894   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.295245   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:54:41.295275   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.298053   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298453   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.298482   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.298630   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.298831   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.299028   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.299229   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.383434   60073 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:54:41.387526   60073 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:54:41.387550   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:54:41.387618   60073 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:54:41.387716   60073 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:54:41.387832   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:54:41.395959   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:41.417602   60073 start.go:303] post-start completed in 122.726786ms
	I0116 23:54:41.417634   60073 fix.go:56] fixHost completed within 19.430636017s
	I0116 23:54:41.417657   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.420348   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420665   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.420692   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.420853   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.421099   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421245   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.421386   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.421532   60073 main.go:141] libmachine: Using SSH client type: native
	I0116 23:54:41.421882   60073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0116 23:54:41.421898   60073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 23:54:41.527026   60073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449281.479666719
	
	I0116 23:54:41.527054   60073 fix.go:206] guest clock: 1705449281.479666719
	I0116 23:54:41.527061   60073 fix.go:219] Guest: 2024-01-16 23:54:41.479666719 +0000 UTC Remote: 2024-01-16 23:54:41.417638777 +0000 UTC m=+272.403645668 (delta=62.027942ms)
	I0116 23:54:41.527080   60073 fix.go:190] guest clock delta is within tolerance: 62.027942ms
	I0116 23:54:41.527085   60073 start.go:83] releasing machines lock for "embed-certs-837871", held for 19.540117712s
	I0116 23:54:41.527105   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.527420   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:41.530393   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.530857   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.530884   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.531031   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531460   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531637   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:54:41.531720   60073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:54:41.531774   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.531821   60073 ssh_runner.go:195] Run: cat /version.json
	I0116 23:54:41.531854   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:54:41.534407   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534578   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534777   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.534819   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.534933   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535031   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:41.535068   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:41.535135   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535229   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:54:41.535308   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535381   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:54:41.535431   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.535512   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:54:41.535633   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:54:41.653469   60073 ssh_runner.go:195] Run: systemctl --version
	I0116 23:54:41.658877   60073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:54:41.797035   60073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:54:41.804397   60073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:54:41.804475   60073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:54:41.819295   60073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:54:41.819319   60073 start.go:475] detecting cgroup driver to use...
	I0116 23:54:41.819382   60073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:54:41.833454   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:54:41.845089   60073 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:54:41.845145   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:54:41.857037   60073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:54:41.869156   60073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:54:41.968252   60073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:54:42.079885   60073 docker.go:233] disabling docker service ...
	I0116 23:54:42.079949   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:54:42.091847   60073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:54:42.102517   60073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:54:42.217275   60073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:54:42.314542   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:54:42.326438   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:54:42.342285   60073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:54:42.342356   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.354962   60073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:54:42.355039   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.367222   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.379029   60073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:54:42.387819   60073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:54:42.396923   60073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:54:42.404505   60073 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:54:42.404567   60073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:54:42.415632   60073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:54:42.423935   60073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:42.520457   60073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:54:42.676659   60073 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:54:42.676727   60073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:54:42.681457   60073 start.go:543] Will wait 60s for crictl version
	I0116 23:54:42.681535   60073 ssh_runner.go:195] Run: which crictl
	I0116 23:54:42.685259   60073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:54:42.728719   60073 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:54:42.728807   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.780603   60073 ssh_runner.go:195] Run: crio --version
	I0116 23:54:42.830363   60073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
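The sed edits a few lines above leave /etc/crio/crio.conf.d/02-crio.conf pointing CRI-O at the 3.9 pause image and the cgroupfs driver; a quick on-node check (the expected values follow directly from the logged sed expressions):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"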
	I0116 23:54:39.032115   59938 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 23:54:39.032163   59938 cache_images.go:123] Successfully loaded all cached images
	I0116 23:54:39.032171   59938 cache_images.go:92] LoadImages completed in 15.67329231s
	I0116 23:54:39.032335   59938 ssh_runner.go:195] Run: crio config
	I0116 23:54:39.091256   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:39.091279   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:39.091299   59938 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:39.091318   59938 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-085322 NodeName:no-preload-085322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:39.091470   59938 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-085322"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:39.091558   59938 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-085322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 23:54:39.091619   59938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 23:54:39.100748   59938 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:39.100805   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:39.108879   59938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 23:54:39.123478   59938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 23:54:39.138234   59938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
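A hedged sanity check of the config just written to /var/tmp/minikube/kubeadm.yaml.new (minikube does not run this step itself; it assumes the kubeadm binary sits alongside the kubelet under /var/lib/minikube/binaries/v1.29.0-rc.2, and that "kubeadm config validate", available since k8s 1.26, is present):
	sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new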
	I0116 23:54:39.153408   59938 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:39.156806   59938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:39.168459   59938 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322 for IP: 192.168.50.183
	I0116 23:54:39.168490   59938 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:39.168630   59938 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:39.168669   59938 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:39.168728   59938 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/client.key
	I0116 23:54:39.168800   59938 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key.c63b40e0
	I0116 23:54:39.168839   59938 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key
	I0116 23:54:39.168946   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:39.168971   59938 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:39.168981   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:39.169006   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:39.169029   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:39.169052   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:39.169104   59938 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:39.169755   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:39.191634   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:54:39.213185   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:39.234431   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/no-preload-085322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:54:39.255434   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:39.277092   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:39.299752   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:39.321124   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:39.342706   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:39.363848   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:39.384588   59938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:39.405641   59938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:39.421517   59938 ssh_runner.go:195] Run: openssl version
	I0116 23:54:39.426839   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:39.435875   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440157   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.440217   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:39.445267   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:54:39.454308   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:39.463232   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467601   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.467660   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:39.473056   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:39.482143   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:39.491441   59938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495918   59938 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.495984   59938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:39.501453   59938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:39.510832   59938 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:39.515055   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:39.520820   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:39.526190   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:39.531649   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:39.536949   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:39.542406   59938 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
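The `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate is still valid for at least 24 hours. A hedged Go equivalent using crypto/x509 (the certificate path in main is just an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// d from now — the same question `openssl x509 -checkend 86400` answers.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(ok, err)
}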
	I0116 23:54:39.547673   59938 kubeadm.go:404] StartCluster: {Name:no-preload-085322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-085322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:39.547793   59938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:39.547843   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:39.584159   59938 cri.go:89] found id: ""
	I0116 23:54:39.584236   59938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:39.592749   59938 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:39.592769   59938 kubeadm.go:636] restartCluster start
	I0116 23:54:39.592830   59938 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:39.600998   59938 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:39.602031   59938 kubeconfig.go:92] found "no-preload-085322" server: "https://192.168.50.183:8443"
	I0116 23:54:39.604410   59938 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:39.612167   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:39.612220   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:39.622740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.112200   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.112274   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.123342   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:40.612980   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:40.613059   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:40.624162   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.112722   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.112787   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.123740   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.612248   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:41.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:41.626135   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.112616   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.112723   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.126872   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:42.612417   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:42.612503   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:42.623787   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.112309   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.112383   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.127168   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:41.551739   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Start
	I0116 23:54:41.551879   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring networks are active...
	I0116 23:54:41.552631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network default is active
	I0116 23:54:41.552977   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Ensuring network mk-default-k8s-diff-port-967325 is active
	I0116 23:54:41.553395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Getting domain xml...
	I0116 23:54:41.554029   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Creating domain...
	I0116 23:54:42.830696   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting to get IP...
	I0116 23:54:42.831669   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832085   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:42.832186   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:42.832069   61077 retry.go:31] will retry after 250.838508ms: waiting for machine to come up
	I0116 23:54:43.084848   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085478   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.085513   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.085378   61077 retry.go:31] will retry after 344.020128ms: waiting for machine to come up
	I0116 23:54:43.430795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431300   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.431329   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.431260   61077 retry.go:31] will retry after 397.588837ms: waiting for machine to come up
	I0116 23:54:42.831766   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetIP
	I0116 23:54:42.834360   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834734   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:54:42.834763   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:54:42.834949   60073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 23:54:42.838761   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:42.853154   60073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:54:42.853222   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:42.890184   60073 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:54:42.890265   60073 ssh_runner.go:195] Run: which lz4
	I0116 23:54:42.894168   60073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 23:54:42.898036   60073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:54:42.898066   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:54:43.612492   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:43.612614   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:43.626278   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.112257   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.112377   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.126612   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:44.612241   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:44.612325   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:44.626667   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.112214   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.112305   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.127417   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:45.612957   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:45.613061   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:45.626610   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.112219   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.112324   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.126151   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:46.612419   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:46.612513   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:46.623163   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.112516   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.112621   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.123247   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:47.612620   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:47.612713   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:47.623687   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.112357   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.112460   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.126673   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:43.830893   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831467   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:43.831495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:43.831405   61077 retry.go:31] will retry after 443.763933ms: waiting for machine to come up
	I0116 23:54:44.277218   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.277738   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.277666   61077 retry.go:31] will retry after 534.948362ms: waiting for machine to come up
	I0116 23:54:44.814256   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814634   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:44.814674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:44.814585   61077 retry.go:31] will retry after 942.746702ms: waiting for machine to come up
	I0116 23:54:45.758822   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759311   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:45.759340   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:45.759238   61077 retry.go:31] will retry after 1.189643515s: waiting for machine to come up
	I0116 23:54:46.951211   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:46.951644   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:46.951576   61077 retry.go:31] will retry after 1.124824496s: waiting for machine to come up
	I0116 23:54:48.077539   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.077964   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:48.078001   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:48.077909   61077 retry.go:31] will retry after 1.239334518s: waiting for machine to come up
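The repeated "will retry after ..." lines above come from a retry-with-growing-delay loop while the VM acquires a DHCP lease. A simplified Go sketch of that pattern (the real delays are jittered by minikube's retry package; the 3/2 multiplier here is an assumption for illustration):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a little longer between tries — the same shape as the wait for
// the machine to come up in the log.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		return errors.New("waiting for machine to come up")
	})
	fmt.Println("gave up:", err)
}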
	I0116 23:54:44.553853   60073 crio.go:444] Took 1.659729 seconds to copy over tarball
	I0116 23:54:44.553941   60073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:54:47.428880   60073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.87490029s)
	I0116 23:54:47.428913   60073 crio.go:451] Took 2.875036 seconds to extract the tarball
	I0116 23:54:47.428921   60073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:54:47.469606   60073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:54:47.521549   60073 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:54:47.521580   60073 cache_images.go:84] Images are preloaded, skipping loading
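"all images are preloaded" above is decided by listing images through crictl and checking that the expected tags are present. A hedged Go sketch of that check (the JSON field names images/repoTags are assumed from typical crictl output and may vary by version):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors only the fields this sketch needs from
// `crictl images --output json`; field names are an assumption.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image tag contains want,
// roughly the decision the preload check relies on.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}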
	I0116 23:54:47.521660   60073 ssh_runner.go:195] Run: crio config
	I0116 23:54:47.575254   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:54:47.575276   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:47.575292   60073 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:54:47.575309   60073 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-837871 NodeName:embed-certs-837871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:54:47.575434   60073 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-837871"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:54:47.575518   60073 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-837871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
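The kubelet drop-in and its flags shown above are rendered from the cluster config before being copied to the node. A minimal text/template sketch of how such a drop-in could be produced (the template text and kubeletOpts fields are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the handful of values this sketch substitutes into the
// drop-in; illustrative only.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Render to stdout; the real flow copies the rendered bytes to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.28.4",
		NodeName:          "embed-certs-837871",
		NodeIP:            "192.168.39.226",
	})
}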
	I0116 23:54:47.575569   60073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:54:47.584525   60073 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:54:47.584604   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:54:47.592958   60073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 23:54:47.608090   60073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:54:47.623862   60073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 23:54:47.640242   60073 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0116 23:54:47.644031   60073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:54:47.658210   60073 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871 for IP: 192.168.39.226
	I0116 23:54:47.658247   60073 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:54:47.658451   60073 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:54:47.658543   60073 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:54:47.658766   60073 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/client.key
	I0116 23:54:47.658866   60073 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key.1754aec7
	I0116 23:54:47.658920   60073 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key
	I0116 23:54:47.659066   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:54:47.659104   60073 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:54:47.659123   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:54:47.659160   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:54:47.659190   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:54:47.659223   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:54:47.659275   60073 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:54:47.659998   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:54:47.687031   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:54:47.713026   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:54:47.738546   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/embed-certs-837871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:54:47.764460   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:54:47.789464   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:54:47.814847   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:54:47.839476   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:54:47.864396   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:54:47.889208   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:54:47.914128   60073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:54:47.935079   60073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:54:47.950932   60073 ssh_runner.go:195] Run: openssl version
	I0116 23:54:47.957306   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:54:47.967238   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972287   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.972338   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:54:47.977862   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:54:47.989326   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:54:47.999739   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004111   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.004170   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:54:48.009425   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:54:48.019822   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:54:48.029871   60073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034154   60073 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.034221   60073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:54:48.039911   60073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
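The `openssl x509 -hash` / `ln -fs` pairs above create the <subject-hash>.0 symlinks that OpenSSL's CApath lookup expects under /etc/ssl/certs. A small Go sketch with the same effect, shelling out to openssl for the hash (linkCertByHash and the paths in main are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a certificate and
// creates the <hash>.0 symlink pointing at it, as the log's ln -fs does.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}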
	I0116 23:54:48.051585   60073 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:54:48.056576   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:54:48.062200   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:54:48.067931   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:54:48.073393   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:54:48.079291   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:54:48.084923   60073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 23:54:48.090458   60073 kubeadm.go:404] StartCluster: {Name:embed-certs-837871 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-837871 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:54:48.090572   60073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:54:48.090637   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:48.132138   60073 cri.go:89] found id: ""
	I0116 23:54:48.132214   60073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:54:48.141955   60073 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:54:48.141976   60073 kubeadm.go:636] restartCluster start
	I0116 23:54:48.142032   60073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:54:48.151297   60073 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.152324   60073 kubeconfig.go:92] found "embed-certs-837871" server: "https://192.168.39.226:8443"
	I0116 23:54:48.154585   60073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:54:48.163509   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.163570   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.175536   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.664083   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.664180   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:48.676605   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:48.613067   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:48.992894   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.004266   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.112494   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.112595   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.123795   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.612548   59938 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.612642   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.626676   59938 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.626707   59938 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:49.626718   59938 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:49.626732   59938 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:49.626806   59938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:49.668119   59938 cri.go:89] found id: ""
	I0116 23:54:49.668192   59938 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:49.682918   59938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:49.691744   59938 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:49.691817   59938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700863   59938 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:49.700895   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:49.815616   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.020421   59938 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.204764214s)
	I0116 23:54:51.020454   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.216832   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:51.332109   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
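Because the config check failed, the cluster is rebuilt by replaying individual `kubeadm init` phases against the freshly written kubeadm.yaml rather than running a full init, as the sequence above shows. A simplified Go sketch of that sequence (the real runs are wrapped in `sudo env PATH=...`; error handling here is reduced for illustration):

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm phases the restart path uses:
// certs, kubeconfig, kubelet-start, control-plane and etcd are regenerated
// from the same kubeadm.yaml instead of doing a full `kubeadm init`.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/lib/minikube/binaries/v1.29.0-rc.2", "/var/tmp/minikube/kubeadm.yaml")
	fmt.Println(err)
}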
	I0116 23:54:51.399376   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:51.399475   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:51.899827   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.400392   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:52.899528   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.399686   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:49.319244   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319686   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:49.319717   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:49.319624   61077 retry.go:31] will retry after 1.922153535s: waiting for machine to come up
	I0116 23:54:51.243587   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244058   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:51.244098   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:51.244008   61077 retry.go:31] will retry after 2.437065869s: waiting for machine to come up
	I0116 23:54:53.683433   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683851   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:53.683882   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:53.683823   61077 retry.go:31] will retry after 3.130209662s: waiting for machine to come up
	I0116 23:54:49.163895   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.351314   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.362966   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:49.664243   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:49.664369   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:49.683487   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.163655   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.163757   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.180005   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:50.664531   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:50.664611   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:50.680106   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.163758   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.163894   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.179982   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:51.664626   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:51.664708   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:51.676699   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.163544   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.163670   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.180656   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:52.663792   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:52.663880   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:52.678849   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.164052   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.164169   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.178666   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.664220   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:53.664316   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:53.678867   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:53.899990   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:54:53.919132   59938 api_server.go:72] duration metric: took 2.51975517s to wait for apiserver process to appear ...
	I0116 23:54:53.919159   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:54:53.919179   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.905143   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.905180   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.905196   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.941657   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.941684   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:56.941697   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:56.986154   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:54:56.986183   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:54:57.419788   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.424352   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.424379   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:57.919987   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:57.926989   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:54:57.927013   59938 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:54:58.420219   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:54:58.426904   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:54:58.435007   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:54:58.435038   59938 api_server.go:131] duration metric: took 4.515871856s to wait for apiserver health ...
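The healthz progression above (403 for the anonymous probe, then 500 while [-]poststarthook/rbac/bootstrap-roles is still failing, then 200) is the pattern the wait loop tolerates before declaring the apiserver healthy. As a minimal illustrative sketch only, not minikube's actual implementation, an equivalent anonymous poll in Go could look like this (host, port, and timeout are taken from the log above; InsecureSkipVerify stands in for the cluster CA handling):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz issues anonymous GETs against /healthz and treats any non-200
// answer (403 before RBAC bootstrap, 500 while post-start hooks finish) as
// "not ready yet", retrying until the deadline expires.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is unauthenticated and the apiserver cert is not verified here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.50.183:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}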
	I0116 23:54:58.435051   59938 cni.go:84] Creating CNI manager for ""
	I0116 23:54:58.435061   59938 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:54:58.437150   59938 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:54:58.438936   59938 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:54:58.455657   59938 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:54:58.508821   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:54:58.522305   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:54:58.522361   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:54:58.522372   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:54:58.522386   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:54:58.522403   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:54:58.522414   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:54:58.522428   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:54:58.522440   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:54:58.522449   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:54:58.522459   59938 system_pods.go:74] duration metric: took 13.604825ms to wait for pod list to return data ...
	I0116 23:54:58.522472   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:54:58.525739   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:54:58.525780   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:54:58.525802   59938 node_conditions.go:105] duration metric: took 3.32348ms to run NodePressure ...
	I0116 23:54:58.525836   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:56.815572   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816189   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | unable to find current IP address of domain default-k8s-diff-port-967325 in network mk-default-k8s-diff-port-967325
	I0116 23:54:56.816215   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | I0116 23:54:56.816141   61077 retry.go:31] will retry after 4.356544243s: waiting for machine to come up
	I0116 23:54:54.164263   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.164410   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.179137   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:54.663638   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:54.663755   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:54.678463   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.163957   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.164041   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.177018   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:55.663543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:55.663648   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:55.674693   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.164347   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.164456   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.175674   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:56.664319   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:56.664402   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:56.675373   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.164471   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.164576   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.176504   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:57.664144   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:57.664251   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:57.676983   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.164543   60073 api_server.go:166] Checking apiserver status ...
	I0116 23:54:58.164621   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:54:58.176779   60073 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:54:58.176811   60073 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:54:58.176821   60073 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:54:58.176833   60073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:54:58.176899   60073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:54:58.214453   60073 cri.go:89] found id: ""
	I0116 23:54:58.214526   60073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:54:58.232076   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:54:58.240808   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:54:58.240879   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.249983   60073 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:54:58.250013   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.373313   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:58.857922   59938 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862719   59938 kubeadm.go:787] kubelet initialised
	I0116 23:54:58.862738   59938 kubeadm.go:788] duration metric: took 4.782925ms waiting for restarted kubelet to initialise ...
	I0116 23:54:58.862746   59938 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:54:58.869022   59938 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.874505   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874535   59938 pod_ready.go:81] duration metric: took 5.485562ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.874546   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "coredns-76f75df574-ptq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.874554   59938 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.879329   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879355   59938 pod_ready.go:81] duration metric: took 4.787755ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.879363   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "etcd-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.879368   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.883928   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883949   59938 pod_ready.go:81] duration metric: took 4.571713ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.883961   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-apiserver-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.883969   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:58.912868   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912894   59938 pod_ready.go:81] duration metric: took 28.911722ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:58.912907   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:58.912915   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.313029   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313069   59938 pod_ready.go:81] duration metric: took 400.142619ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.313082   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-proxy-64z5c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.313090   59938 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:54:59.712991   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713014   59938 pod_ready.go:81] duration metric: took 399.912003ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	E0116 23:54:59.713023   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "kube-scheduler-no-preload-085322" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:54:59.713028   59938 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:00.114190   59938 pod_ready.go:97] node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114215   59938 pod_ready.go:81] duration metric: took 401.177651ms waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:00.114225   59938 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-085322" hosting pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:00.114231   59938 pod_ready.go:38] duration metric: took 1.251475914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
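Every pod in the wait loop above is skipped for the same reason: node "no-preload-085322" still reports Ready=False immediately after the restart, so the per-pod Ready checks cannot succeed yet. For illustration only (this is not the test's own pod_ready helper, and the kubeconfig path is a placeholder), the same Ready condition can be read with client-go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True, which is the
// condition the log's pod_ready wait is polling for.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test run writes its own under the Jenkins home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-ptq95", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}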
	I0116 23:55:00.114247   59938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:00.127362   59938 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:00.127388   59938 kubeadm.go:640] restartCluster took 20.534611532s
	I0116 23:55:00.127403   59938 kubeadm.go:406] StartCluster complete in 20.579733794s
	I0116 23:55:00.127422   59938 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.127503   59938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:00.129224   59938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:00.129463   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:00.130188   59938 config.go:182] Loaded profile config "no-preload-085322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 23:55:00.129546   59938 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:00.130489   59938 addons.go:69] Setting storage-provisioner=true in profile "no-preload-085322"
	I0116 23:55:00.130520   59938 addons.go:234] Setting addon storage-provisioner=true in "no-preload-085322"
	W0116 23:55:00.130550   59938 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:00.130626   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.131148   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.131179   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.131603   59938 addons.go:69] Setting default-storageclass=true in profile "no-preload-085322"
	I0116 23:55:00.131662   59938 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-085322"
	I0116 23:55:00.132229   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.132282   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.132642   59938 addons.go:69] Setting metrics-server=true in profile "no-preload-085322"
	I0116 23:55:00.132682   59938 addons.go:234] Setting addon metrics-server=true in "no-preload-085322"
	W0116 23:55:00.132691   59938 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:00.132738   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.133280   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.133322   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.137759   59938 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-085322" context rescaled to 1 replicas
	I0116 23:55:00.137827   59938 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:00.139774   59938 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:00.141410   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:00.150892   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0116 23:55:00.151398   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.151952   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.151970   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.152274   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0116 23:55:00.152458   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0116 23:55:00.152489   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.152695   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.152865   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153081   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.153356   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153401   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153541   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.153583   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.153867   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.153942   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.154667   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.154714   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.155326   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.155362   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.156980   59938 addons.go:234] Setting addon default-storageclass=true in "no-preload-085322"
	W0116 23:55:00.157007   59938 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:00.157043   59938 host.go:66] Checking if "no-preload-085322" exists ...
	I0116 23:55:00.157421   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.157529   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.174130   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0116 23:55:00.174627   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.175185   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.175204   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.175566   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.175814   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.175862   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0116 23:55:00.176349   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.176936   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.176948   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.177295   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.177469   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.177631   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.179319   59938 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:00.180744   59938 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.180762   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:00.180777   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.179023   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.182381   59938 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:00.183551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:00.183564   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:00.183585   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.183692   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184112   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.184133   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.184250   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.184767   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.184932   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.185450   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.186460   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.186779   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.186812   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.187038   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.187221   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.187328   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.187452   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.189369   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0116 23:55:00.189703   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.190080   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.190091   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.190478   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.190890   59938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:00.190930   59938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:00.205734   59938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0116 23:55:00.206238   59938 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:00.206799   59938 main.go:141] libmachine: Using API Version  1
	I0116 23:55:00.206818   59938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:00.207212   59938 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:00.207446   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetState
	I0116 23:55:00.208811   59938 main.go:141] libmachine: (no-preload-085322) Calling .DriverName
	I0116 23:55:00.209063   59938 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.209077   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:00.209094   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHHostname
	I0116 23:55:00.211899   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212297   59938 main.go:141] libmachine: (no-preload-085322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:25:4d", ip: ""} in network mk-no-preload-085322: {Iface:virbr2 ExpiryTime:2024-01-17 00:54:14 +0000 UTC Type:0 Mac:52:54:00:57:25:4d Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:no-preload-085322 Clientid:01:52:54:00:57:25:4d}
	I0116 23:55:00.212323   59938 main.go:141] libmachine: (no-preload-085322) DBG | domain no-preload-085322 has defined IP address 192.168.50.183 and MAC address 52:54:00:57:25:4d in network mk-no-preload-085322
	I0116 23:55:00.212575   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHPort
	I0116 23:55:00.212826   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHKeyPath
	I0116 23:55:00.213095   59938 main.go:141] libmachine: (no-preload-085322) Calling .GetSSHUsername
	I0116 23:55:00.213275   59938 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/no-preload-085322/id_rsa Username:docker}
	I0116 23:55:00.307298   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:00.335551   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:00.335575   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:00.372999   59938 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:00.373001   59938 node_ready.go:35] waiting up to 6m0s for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:00.378131   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:00.378152   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:00.380282   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:00.401018   59938 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:00.401069   59938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:00.426132   59938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.093491344s)
	I0116 23:55:01.400832   59938 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020515974s)
	I0116 23:55:01.400920   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400937   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400965   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.400993   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.400886   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401092   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401295   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401313   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401324   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401334   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401360   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401402   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401416   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401417   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401426   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401436   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401448   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401458   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401468   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.401476   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.401725   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401757   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401781   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.401789   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401797   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.401950   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.401973   59938 addons.go:470] Verifying addon metrics-server=true in "no-preload-085322"
	I0116 23:55:01.403136   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.403161   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.403172   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.410263   59938 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:01.410287   59938 main.go:141] libmachine: (no-preload-085322) Calling .Close
	I0116 23:55:01.410536   59938 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:01.410575   59938 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:01.410578   59938 main.go:141] libmachine: (no-preload-085322) DBG | Closing plugin on server side
	I0116 23:55:01.412923   59938 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0116 23:55:02.567723   59622 start.go:369] acquired machines lock for "old-k8s-version-771669" in 54.450397128s
	I0116 23:55:02.567772   59622 start.go:96] Skipping create...Using existing machine configuration
	I0116 23:55:02.567779   59622 fix.go:54] fixHost starting: 
	I0116 23:55:02.568183   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:02.568215   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:02.587692   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0116 23:55:02.588096   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:02.588571   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:02.588590   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:02.588934   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:02.589163   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:02.589273   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:02.590929   59622 fix.go:102] recreateIfNeeded on old-k8s-version-771669: state=Stopped err=<nil>
	I0116 23:55:02.591002   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	W0116 23:55:02.591207   59622 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 23:55:02.593233   59622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-771669" ...
	I0116 23:55:01.414436   59938 addons.go:505] enable addons completed in 1.284891826s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0116 23:55:02.377542   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:01.175656   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Found IP for machine: 192.168.61.144
	I0116 23:55:01.176276   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has current primary IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.176287   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserving static IP address...
	I0116 23:55:01.176764   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Reserved static IP address: 192.168.61.144
	I0116 23:55:01.176803   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.176821   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Waiting for SSH to be available...
	I0116 23:55:01.176849   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | skip adding static IP to network mk-default-k8s-diff-port-967325 - found existing host DHCP lease matching {name: "default-k8s-diff-port-967325", mac: "52:54:00:31:00:23", ip: "192.168.61.144"}
	I0116 23:55:01.176862   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Getting to WaitForSSH function...
	I0116 23:55:01.179585   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180052   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.180086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.180201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH client type: external
	I0116 23:55:01.180225   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa (-rw-------)
	I0116 23:55:01.180258   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:01.180280   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | About to run SSH command:
	I0116 23:55:01.180298   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | exit 0
	I0116 23:55:01.287063   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:01.287361   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetConfigRaw
	I0116 23:55:01.288015   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.291188   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291601   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.291651   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.291892   60269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/config.json ...
	I0116 23:55:01.292147   60269 machine.go:88] provisioning docker machine ...
	I0116 23:55:01.292171   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:01.292392   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292603   60269 buildroot.go:166] provisioning hostname "default-k8s-diff-port-967325"
	I0116 23:55:01.292631   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.292795   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.295688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.296107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.296214   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.296399   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296557   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.296732   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.296957   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.297484   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.297508   60269 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-967325 && echo "default-k8s-diff-port-967325" | sudo tee /etc/hostname
	I0116 23:55:01.444451   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-967325
	
	I0116 23:55:01.444484   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.447658   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448083   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.448130   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.448237   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.448482   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448670   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.448836   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.449035   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.449518   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.449549   60269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-967325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-967325/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-967325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 23:55:01.592961   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:01.592998   60269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:01.593037   60269 buildroot.go:174] setting up certificates
	I0116 23:55:01.593052   60269 provision.go:83] configureAuth start
	I0116 23:55:01.593066   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetMachineName
	I0116 23:55:01.593369   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:01.596637   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597053   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.597093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.597236   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.599945   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600294   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.600332   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.600435   60269 provision.go:138] copyHostCerts
	I0116 23:55:01.600492   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:01.600500   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:01.600560   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:01.600653   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:01.600657   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:01.600675   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:01.600733   60269 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:01.600736   60269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:01.600751   60269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:01.600807   60269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-967325 san=[192.168.61.144 192.168.61.144 localhost 127.0.0.1 minikube default-k8s-diff-port-967325]
	I0116 23:55:01.777575   60269 provision.go:172] copyRemoteCerts
	I0116 23:55:01.777655   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:01.777685   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.780729   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781077   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.781117   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.781323   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.781493   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.781672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.781817   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:01.875542   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:01.898144   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 23:55:01.923770   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:01.947374   60269 provision.go:86] duration metric: configureAuth took 354.306627ms
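
The configureAuth step above issues a server certificate for the machine, signed by the minikube CA and carrying the listed SANs (the node IP, 127.0.0.1, localhost, minikube and the machine hostname), then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A minimal crypto/x509 sketch of issuing such a SAN-bearing server certificate; it uses a throwaway CA and values taken from the log, and is not minikube's actual provision code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // generateServerCert issues a server certificate signed by the given CA,
    // carrying the requested IP and DNS SANs. Sketch only, not minikube's code.
    func generateServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-967325"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dnsNames,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        // Throwaway CA for the example; errors elided for brevity here.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        certPEM, keyPEM, err := generateServerCert(caCert, caKey,
            []net.IP{net.ParseIP("192.168.61.144"), net.ParseIP("127.0.0.1")},
            []string{"localhost", "minikube", "default-k8s-diff-port-967325"})
        if err != nil {
            panic(err)
        }
        os.WriteFile("server.pem", certPEM, 0o644)
        os.WriteFile("server-key.pem", keyPEM, 0o600)
    }

The duplicate IP in the log's SAN list (192.168.61.144 appears twice) comes from the node IP being added alongside the machine IP; the certificate remains valid either way.
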
	I0116 23:55:01.947400   60269 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:01.947656   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:55:01.947752   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:01.950688   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951006   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:01.951031   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:01.951309   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:01.951475   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:01.951846   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:01.952024   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:01.952549   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:01.952575   60269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:02.296465   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:02.296504   60269 machine.go:91] provisioned docker machine in 1.004340116s
	I0116 23:55:02.296517   60269 start.go:300] post-start starting for "default-k8s-diff-port-967325" (driver="kvm2")
	I0116 23:55:02.296533   60269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:02.296559   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.296898   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:02.296931   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.299843   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.300330   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.300424   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.300613   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.300813   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.300988   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.392380   60269 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:02.396719   60269 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:02.396746   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:02.396840   60269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:02.396931   60269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:02.397013   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:02.405217   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:02.428260   60269 start.go:303] post-start completed in 131.726459ms
	I0116 23:55:02.428289   60269 fix.go:56] fixHost completed within 20.901025477s
	I0116 23:55:02.428351   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.431541   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.431904   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.431935   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.432124   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.432327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432495   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.432679   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.432865   60269 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:02.433181   60269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0116 23:55:02.433200   60269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 23:55:02.567559   60269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449302.518065106
	
	I0116 23:55:02.567583   60269 fix.go:206] guest clock: 1705449302.518065106
	I0116 23:55:02.567592   60269 fix.go:219] Guest: 2024-01-16 23:55:02.518065106 +0000 UTC Remote: 2024-01-16 23:55:02.428292966 +0000 UTC m=+263.717566224 (delta=89.77214ms)
	I0116 23:55:02.567628   60269 fix.go:190] guest clock delta is within tolerance: 89.77214ms
	I0116 23:55:02.567634   60269 start.go:83] releasing machines lock for "default-k8s-diff-port-967325", held for 21.040406039s
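
The "date +%s.%N" exchange above is how the guest/host clock skew is estimated: the guest's epoch timestamp is parsed and compared with the host clock, and the machine is accepted when the delta stays within a tolerance. A minimal sketch of that comparison using the value from the log; the 2-second tolerance is an assumption for illustration only:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // Output captured from the guest over SSH, as in the log above.
        guestOut := "1705449302.518065106"
        guest, err := parseGuestClock(guestOut)
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        const tolerance = 2 * time.Second // assumed tolerance for illustration
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
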
	I0116 23:55:02.567676   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.567951   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:02.571196   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.571612   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.571641   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.572815   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573415   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573626   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0116 23:55:02.573709   60269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:02.573777   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.573935   60269 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:02.573963   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0116 23:55:02.577057   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577347   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577687   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577741   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577786   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:02.577804   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:02.577976   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578023   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0116 23:55:02.578172   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578201   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0116 23:55:02.578358   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578359   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0116 23:55:02.578488   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.578514   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0116 23:55:02.707601   60269 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:02.715420   60269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:02.871362   60269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:02.878362   60269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:02.878438   60269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:02.898508   60269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:02.898534   60269 start.go:475] detecting cgroup driver to use...
	I0116 23:55:02.898627   60269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:02.915544   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:02.929881   60269 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:02.929948   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:02.946126   60269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:02.963314   60269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:03.087669   60269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:03.231908   60269 docker.go:233] disabling docker service ...
	I0116 23:55:03.232001   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:03.247745   60269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:03.263573   60269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:03.394931   60269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:03.533725   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:03.550475   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:03.571922   60269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 23:55:03.571984   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.584086   60269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:03.584195   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.595191   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.604671   60269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:03.614076   60269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:03.623637   60269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:03.632143   60269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:03.632225   60269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:03.645964   60269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:03.657719   60269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:54:59.164409   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.363424   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.434315   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:54:59.505227   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:54:59.505321   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.006175   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:00.505693   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.005697   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:01.505467   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.005808   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:02.033017   60073 api_server.go:72] duration metric: took 2.527792184s to wait for apiserver process to appear ...
	I0116 23:55:02.033039   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:02.033056   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:03.785123   60269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 23:55:03.976744   60269 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:03.976819   60269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:03.981545   60269 start.go:543] Will wait 60s for crictl version
	I0116 23:55:03.981598   60269 ssh_runner.go:195] Run: which crictl
	I0116 23:55:03.985233   60269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:04.033443   60269 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:04.033541   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.087776   60269 ssh_runner.go:195] Run: crio --version
	I0116 23:55:04.142302   60269 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 23:55:02.594568   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Start
	I0116 23:55:02.594750   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring networks are active...
	I0116 23:55:02.595457   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network default is active
	I0116 23:55:02.595812   59622 main.go:141] libmachine: (old-k8s-version-771669) Ensuring network mk-old-k8s-version-771669 is active
	I0116 23:55:02.596285   59622 main.go:141] libmachine: (old-k8s-version-771669) Getting domain xml...
	I0116 23:55:02.597150   59622 main.go:141] libmachine: (old-k8s-version-771669) Creating domain...
	I0116 23:55:03.999986   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting to get IP...
	I0116 23:55:04.001060   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.001581   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.001663   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.001550   61289 retry.go:31] will retry after 298.561748ms: waiting for machine to come up
	I0116 23:55:04.302120   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.302820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.302847   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.302767   61289 retry.go:31] will retry after 342.293835ms: waiting for machine to come up
	I0116 23:55:04.646424   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:04.647107   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:04.647133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:04.647055   61289 retry.go:31] will retry after 395.611503ms: waiting for machine to come up
	I0116 23:55:05.046785   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.047276   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.047304   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.047189   61289 retry.go:31] will retry after 552.22886ms: waiting for machine to come up
	I0116 23:55:07.029353   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.029384   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.029401   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.187789   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.187830   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.187877   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.197889   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:07.197924   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:07.533214   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:07.540976   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:07.541008   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.033550   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.044749   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:08.044779   60073 api_server.go:103] status: https://192.168.39.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:08.533231   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0116 23:55:08.540197   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0116 23:55:08.551065   60073 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:08.551108   60073 api_server.go:131] duration metric: took 6.518060223s to wait for apiserver health ...
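
The healthz sequence above (403 while the anonymous user is still forbidden, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200) is produced by polling https://<node>:8443/healthz on a short interval until it answers "ok" or the wait times out. A minimal sketch of such a poll loop using only the standard library; the real check authenticates with the cluster's client certificate, so skipping TLS verification and sending anonymous requests here is purely for illustration:

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or ctx is cancelled, mirroring the retry pattern in the log above.
    func waitForHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
            },
        }
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForHealthz(ctx, "https://192.168.39.226:8443/healthz"); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver is healthy")
    }
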
	I0116 23:55:08.551119   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:55:08.551128   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:08.553370   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:04.377661   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:06.377732   59938 node_ready.go:58] node "no-preload-085322" has status "Ready":"False"
	I0116 23:55:07.377978   59938 node_ready.go:49] node "no-preload-085322" has status "Ready":"True"
	I0116 23:55:07.378007   59938 node_ready.go:38] duration metric: took 7.004955625s waiting for node "no-preload-085322" to be "Ready" ...
	I0116 23:55:07.378019   59938 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:07.394319   59938 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401604   59938 pod_ready.go:92] pod "coredns-76f75df574-ptq95" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.401634   59938 pod_ready.go:81] duration metric: took 7.260618ms waiting for pod "coredns-76f75df574-ptq95" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.401647   59938 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412094   59938 pod_ready.go:92] pod "etcd-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.412123   59938 pod_ready.go:81] duration metric: took 10.46753ms waiting for pod "etcd-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.412137   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922096   59938 pod_ready.go:92] pod "kube-apiserver-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.922169   59938 pod_ready.go:81] duration metric: took 510.023791ms waiting for pod "kube-apiserver-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.922208   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929615   59938 pod_ready.go:92] pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:07.929645   59938 pod_ready.go:81] duration metric: took 7.422332ms waiting for pod "kube-controller-manager-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:07.929659   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178529   59938 pod_ready.go:92] pod "kube-proxy-64z5c" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.178558   59938 pod_ready.go:81] duration metric: took 248.89013ms waiting for pod "kube-proxy-64z5c" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.178572   59938 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
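
The pod_ready waits above follow the usual pattern of polling each system pod's PodReady condition through the API server until it reports True or the 6-minute budget runs out. A minimal client-go sketch of one such wait; the pod name is taken from the log, while the kubeconfig path and the 2-second interval are illustrative assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // Poll every 2s for up to 6 minutes, as in the log above.
        err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-085322", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                return isPodReady(pod), nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
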
	I0116 23:55:04.144239   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetIP
	I0116 23:55:04.147395   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.147816   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0116 23:55:04.147864   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0116 23:55:04.148032   60269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:04.152106   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:04.166312   60269 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 23:55:04.166412   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:04.207955   60269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 23:55:04.208024   60269 ssh_runner.go:195] Run: which lz4
	I0116 23:55:04.211817   60269 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 23:55:04.215791   60269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:04.215816   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 23:55:06.109275   60269 crio.go:444] Took 1.897478 seconds to copy over tarball
	I0116 23:55:06.109361   60269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:08.555066   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:08.584102   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:08.660533   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:08.680559   60073 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:08.680588   60073 system_pods.go:61] "coredns-5dd5756b68-49p2f" [5241a39a-599e-4ae2-b8c8-7494382819d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:08.680595   60073 system_pods.go:61] "etcd-embed-certs-837871" [99fce5e6-124e-4e96-b722-41c0be595863] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:08.680603   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [7bf73dd6-7f27-482a-896a-a5097bd047a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:08.680609   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [be8f34fb-2d00-4c86-aab3-c4d74d92d42c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:08.680615   60073 system_pods.go:61] "kube-proxy-nglts" [3ec00f1a-258b-4da3-9b41-dbd96156de04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:08.680624   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [f9af2c43-cb66-4ebb-b23c-4f898be33d64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:08.680669   60073 system_pods.go:61] "metrics-server-57f55c9bc5-npd7s" [5aa75079-2c85-4fde-ba88-9ae5bb73ecc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:08.680678   60073 system_pods.go:61] "storage-provisioner" [5bae4d8b-030b-4476-8aa6-f4a66a8f80a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 23:55:08.680685   60073 system_pods.go:74] duration metric: took 20.127241ms to wait for pod list to return data ...
	I0116 23:55:08.680695   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:08.685562   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:08.685594   60073 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:08.685604   60073 node_conditions.go:105] duration metric: took 4.905393ms to run NodePressure ...
	I0116 23:55:08.685622   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:05.600887   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:05.601408   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:05.601444   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:05.601312   61289 retry.go:31] will retry after 584.67072ms: waiting for machine to come up
	I0116 23:55:06.188018   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:06.188524   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:06.188550   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:06.188434   61289 retry.go:31] will retry after 859.064841ms: waiting for machine to come up
	I0116 23:55:07.048810   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:07.049461   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:07.049491   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:07.049417   61289 retry.go:31] will retry after 1.064800753s: waiting for machine to come up
	I0116 23:55:08.115741   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:08.116406   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:08.116430   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:08.116372   61289 retry.go:31] will retry after 1.289118736s: waiting for machine to come up
	I0116 23:55:09.407820   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:09.408291   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:09.408319   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:09.408262   61289 retry.go:31] will retry after 1.623353195s: waiting for machine to come up
	I0116 23:55:08.979310   59938 pod_ready.go:92] pod "kube-scheduler-no-preload-085322" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:08.979407   59938 pod_ready.go:81] duration metric: took 800.824219ms waiting for pod "kube-scheduler-no-preload-085322" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:08.979438   59938 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:11.546193   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:09.452388   60269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342992298s)
	I0116 23:55:09.452415   60269 crio.go:451] Took 3.343109 seconds to extract the tarball
	I0116 23:55:09.452423   60269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 23:55:09.497202   60269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:09.552426   60269 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 23:55:09.552460   60269 cache_images.go:84] Images are preloaded, skipping loading
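
Both image checks above shell out to "sudo crictl images --output json" and look for the required control-plane tags: when they are missing (as at 23:55:04) the ~458 MB preload tarball is copied over and extracted into /var, and when they are present (as here) loading is skipped. A small sketch of that check; the JSON field names follow crictl's list output and should be treated as an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages mirrors the subset of `crictl images --output json`
    // needed for the check; field names are an assumption for illustration.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether any listed image carries the wanted tag.
    func hasImage(out []byte, want string) (bool, error) {
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.4")
        if err != nil {
            panic(err)
        }
        if ok {
            fmt.Println("images are preloaded, skipping tarball")
        } else {
            fmt.Println("image missing, would copy and extract preloaded.tar.lz4")
        }
    }
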
	I0116 23:55:09.552532   60269 ssh_runner.go:195] Run: crio config
	I0116 23:55:09.623685   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:09.623716   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:09.623743   60269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:09.623767   60269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-967325 NodeName:default-k8s-diff-port-967325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 23:55:09.623938   60269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-967325"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:09.624024   60269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-967325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
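
The kubelet drop-in above is rendered from the cluster config (runtime socket, hostname override, node IP) and then written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as the 388-byte scp a few lines below shows. A minimal text/template sketch of producing such a unit file; the template text is a simplified stand-in, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletFlags holds the per-node values substituted into the drop-in.
    type kubeletFlags struct {
        KubernetesVersion string
        Hostname          string
        NodeIP            string
        RuntimeEndpoint   string
    }

    // A simplified stand-in for the real template.
    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.RuntimeEndpoint}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        err := t.Execute(os.Stdout, kubeletFlags{
            KubernetesVersion: "v1.28.4",
            Hostname:          "default-k8s-diff-port-967325",
            NodeIP:            "192.168.61.144",
            RuntimeEndpoint:   "unix:///var/run/crio/crio.sock",
        })
        if err != nil {
            panic(err)
        }
    }
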
	I0116 23:55:09.624079   60269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 23:55:09.632768   60269 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:09.632838   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:09.642978   60269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 23:55:09.660304   60269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:09.677864   60269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 23:55:09.699234   60269 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:09.703170   60269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:09.718511   60269 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325 for IP: 192.168.61.144
	I0116 23:55:09.718551   60269 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:09.718727   60269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:09.718798   60269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:09.718895   60269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/client.key
	I0116 23:55:09.718975   60269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key.a430fbc2
	I0116 23:55:09.719039   60269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key
	I0116 23:55:09.719175   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:09.719225   60269 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:09.719240   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:09.719283   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:09.719318   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:09.719358   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:09.719416   60269 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:09.720339   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:09.748578   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 23:55:09.778396   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:09.803745   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/default-k8s-diff-port-967325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 23:55:09.828009   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:09.850951   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:09.874273   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:09.897385   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:09.923319   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:09.946301   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:09.970778   60269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:09.994497   60269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:10.013259   60269 ssh_runner.go:195] Run: openssl version
	I0116 23:55:10.020357   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:10.032324   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037071   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.037122   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:10.043220   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:10.052796   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:10.063065   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.067904   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.068000   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:10.074570   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:10.087080   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:10.099734   60269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105299   60269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.105360   60269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:10.112084   60269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
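The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust directory: OpenSSL looks certificates up in /etc/ssl/certs by subject-hash filename (<hash>.0), so each PEM gets a symlink named after its hash (b5213941.0 for minikubeCA.pem here). A minimal sketch of the same step for one certificate, using the paths from the log:

    # compute the subject hash and create the hash-named symlink OpenSSL expects
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"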
	I0116 23:55:10.123175   60269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:10.127669   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:10.133522   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:10.139085   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:10.145018   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:10.150920   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:10.156719   60269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
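The -checkend 86400 runs above are quick expiry probes: openssl x509 -checkend N exits 0 if the certificate will still be valid N seconds from now and 1 if it will have expired, so 86400 asks "good for at least another 24 hours?". A minimal sketch against one of the files checked above:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (or already expired)"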
	I0116 23:55:10.162808   60269 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-967325 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-967325 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:10.162893   60269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:10.162936   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:10.208917   60269 cri.go:89] found id: ""
	I0116 23:55:10.209008   60269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:10.221689   60269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:10.221710   60269 kubeadm.go:636] restartCluster start
	I0116 23:55:10.221776   60269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:10.233762   60269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:10.234916   60269 kubeconfig.go:92] found "default-k8s-diff-port-967325" server: "https://192.168.61.144:8444"
	I0116 23:55:10.237484   60269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:10.246418   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.246495   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.257759   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
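Each "Checking apiserver status" round above is a pgrep poll: -f matches against the full command line, -x requires the pattern to match it exactly, and -n picks the newest match, so kube-apiserver.*minikube.* only hits a real apiserver process. Exit status 1 simply means no match yet, and the timestamps show the probe being retried roughly every 500ms. The same probe as a one-liner:

    # prints the apiserver PID once the control plane is up, exits 1 while it is not
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'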
	I0116 23:55:10.747378   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:10.747466   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:10.761884   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.247445   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.247543   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.258490   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:11.747483   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:11.747623   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:11.764389   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.246997   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.247122   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.262538   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:12.747219   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:12.747387   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:12.762535   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.246636   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.246705   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.258883   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:13.747504   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:13.747588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:13.759640   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:09.229704   60073 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224745   60073 kubeadm.go:787] kubelet initialised
	I0116 23:55:10.224771   60073 kubeadm.go:788] duration metric: took 994.984702ms waiting for restarted kubelet to initialise ...
	I0116 23:55:10.224781   60073 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:11.348058   60073 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.356516   60073 pod_ready.go:102] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:13.856540   60073 pod_ready.go:92] pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:13.856573   60073 pod_ready.go:81] duration metric: took 2.508479475s waiting for pod "coredns-5dd5756b68-49p2f" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:13.856586   60073 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
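The pod_ready waits above poll each system-critical pod for the Ready condition with a 4m0s per-pod budget, using the label selectors listed earlier (k8s-app=kube-dns, component=etcd, and so on). Roughly the same check expressed with kubectl, as an equivalent sketch rather than what the test harness actually runs:

    kubectl --context embed-certs-837871 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s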
	I0116 23:55:11.033009   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:11.033544   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:11.033588   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:11.033487   61289 retry.go:31] will retry after 1.553841353s: waiting for machine to come up
	I0116 23:55:12.588794   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:12.589269   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:12.589297   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:12.589245   61289 retry.go:31] will retry after 1.907517113s: waiting for machine to come up
	I0116 23:55:14.499305   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:14.499734   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:14.499759   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:14.499683   61289 retry.go:31] will retry after 3.406811143s: waiting for machine to come up
	I0116 23:55:13.986208   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:15.987948   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:18.490012   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:14.247197   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.247299   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.262013   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:14.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:14.746558   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:14.761452   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.246988   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.247075   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.261345   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.747524   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:15.747618   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:15.760291   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.246551   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.246648   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.260545   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:16.746471   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:16.746585   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:16.758637   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.247227   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.247331   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.258514   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:17.747046   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:17.747138   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:17.758877   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.247489   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.247561   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.259581   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:18.747241   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:18.747335   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:18.759146   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:15.867702   60073 pod_ready.go:102] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:17.864681   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.864706   60073 pod_ready.go:81] duration metric: took 4.008111977s waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.864718   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873106   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.873127   60073 pod_ready.go:81] duration metric: took 8.400576ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.873136   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878501   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.878519   60073 pod_ready.go:81] duration metric: took 5.375395ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.878535   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883653   60073 pod_ready.go:92] pod "kube-proxy-nglts" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.883669   60073 pod_ready.go:81] duration metric: took 5.128525ms waiting for pod "kube-proxy-nglts" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.883680   60073 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.888978   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:17.888996   60073 pod_ready.go:81] duration metric: took 5.309484ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.889011   60073 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:17.908092   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:17.908486   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | unable to find current IP address of domain old-k8s-version-771669 in network mk-old-k8s-version-771669
	I0116 23:55:17.908520   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | I0116 23:55:17.908432   61289 retry.go:31] will retry after 3.983135021s: waiting for machine to come up
	I0116 23:55:20.987833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:22.989682   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:19.246437   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.246547   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.257900   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:19.746450   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:19.746572   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:19.758509   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.247334   60269 api_server.go:166] Checking apiserver status ...
	I0116 23:55:20.247418   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:20.258909   60269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:20.258939   60269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:20.258948   60269 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:20.258958   60269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:20.259023   60269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:20.300659   60269 cri.go:89] found id: ""
	I0116 23:55:20.300740   60269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:20.315326   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:20.323563   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:20.323629   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331846   60269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:20.331871   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:20.443085   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.556705   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.113585461s)
	I0116 23:55:21.556730   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.745024   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:21.824910   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
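The restart path above re-runs individual kubeadm init phases against the regenerated kubeadm.yaml instead of doing a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, and etcd are rebuilt in that order, after which minikube only has to wait for the apiserver process. The same sequence as plain commands (a sketch of what the ssh_runner invocations above boil down to):

    cfg=/var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase certs all --config "$cfg"
    sudo kubeadm init phase kubeconfig all --config "$cfg"
    sudo kubeadm init phase kubelet-start --config "$cfg"
    sudo kubeadm init phase control-plane all --config "$cfg"
    sudo kubeadm init phase etcd local --config "$cfg"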
	I0116 23:55:21.916770   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:21.916856   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.416983   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:22.917411   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:23.417012   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:19.896636   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.898504   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:21.896143   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896665   59622 main.go:141] libmachine: (old-k8s-version-771669) Found IP for machine: 192.168.72.114
	I0116 23:55:21.896717   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has current primary IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.896729   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserving static IP address...
	I0116 23:55:21.897128   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.897157   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | skip adding static IP to network mk-old-k8s-version-771669 - found existing host DHCP lease matching {name: "old-k8s-version-771669", mac: "52:54:00:31:a2:e8", ip: "192.168.72.114"}
	I0116 23:55:21.897174   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Getting to WaitForSSH function...
	I0116 23:55:21.897194   59622 main.go:141] libmachine: (old-k8s-version-771669) Reserved static IP address: 192.168.72.114
	I0116 23:55:21.897207   59622 main.go:141] libmachine: (old-k8s-version-771669) Waiting for SSH to be available...
	I0116 23:55:21.900064   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900492   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:21.900531   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:21.900775   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH client type: external
	I0116 23:55:21.900805   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa (-rw-------)
	I0116 23:55:21.900835   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 23:55:21.900852   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | About to run SSH command:
	I0116 23:55:21.900867   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | exit 0
	I0116 23:55:22.002573   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | SSH cmd err, output: <nil>: 
	I0116 23:55:22.003051   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetConfigRaw
	I0116 23:55:22.003790   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.007208   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.007726   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.007947   59622 profile.go:148] Saving config to /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/config.json ...
	I0116 23:55:22.008199   59622 machine.go:88] provisioning docker machine ...
	I0116 23:55:22.008225   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.008439   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008649   59622 buildroot.go:166] provisioning hostname "old-k8s-version-771669"
	I0116 23:55:22.008672   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.008859   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.011893   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012288   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.012321   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.012475   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.012655   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.012825   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.013009   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.013176   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.013645   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.013669   59622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-771669 && echo "old-k8s-version-771669" | sudo tee /etc/hostname
	I0116 23:55:22.159863   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-771669
	
	I0116 23:55:22.159897   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.162806   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163257   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.163296   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.163483   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.163700   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.163882   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.164023   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.164179   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.164551   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.164569   59622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-771669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-771669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-771669' | sudo tee -a /etc/hosts; 
				fi
			fi
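The snippet above makes sure the new hostname resolves locally: if no line in /etc/hosts already ends with old-k8s-version-771669, an existing 127.0.1.1 entry is rewritten, or a new one is appended, following the common Debian-style convention for mapping a machine's own hostname. Expected result on the node (a sketch):

    grep old-k8s-version-771669 /etc/hosts
    # 127.0.1.1 old-k8s-version-771669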
	I0116 23:55:22.309881   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 23:55:22.309914   59622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17975-6238/.minikube CaCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17975-6238/.minikube}
	I0116 23:55:22.309935   59622 buildroot.go:174] setting up certificates
	I0116 23:55:22.309945   59622 provision.go:83] configureAuth start
	I0116 23:55:22.309957   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetMachineName
	I0116 23:55:22.310198   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:22.312567   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.312901   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.312930   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.313107   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.315382   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.315767   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.315807   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.316000   59622 provision.go:138] copyHostCerts
	I0116 23:55:22.316043   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem, removing ...
	I0116 23:55:22.316053   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem
	I0116 23:55:22.316116   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/cert.pem (1123 bytes)
	I0116 23:55:22.316202   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem, removing ...
	I0116 23:55:22.316210   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem
	I0116 23:55:22.316228   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/key.pem (1679 bytes)
	I0116 23:55:22.316289   59622 exec_runner.go:144] found /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem, removing ...
	I0116 23:55:22.316296   59622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem
	I0116 23:55:22.316312   59622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17975-6238/.minikube/ca.pem (1082 bytes)
	I0116 23:55:22.316365   59622 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-771669 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube old-k8s-version-771669]
	I0116 23:55:22.437253   59622 provision.go:172] copyRemoteCerts
	I0116 23:55:22.437325   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 23:55:22.437348   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.440075   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440363   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.440390   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.440626   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.440808   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.440960   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.441145   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:22.536222   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 23:55:22.562061   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 23:55:22.586856   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 23:55:22.610936   59622 provision.go:86] duration metric: configureAuth took 300.975023ms
	I0116 23:55:22.610965   59622 buildroot.go:189] setting minikube options for container-runtime
	I0116 23:55:22.611217   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 23:55:22.611306   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.614770   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615218   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.615253   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.615508   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.615738   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.615931   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.616078   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.616259   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:22.616622   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:22.616641   59622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 23:55:22.958075   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 23:55:22.958102   59622 machine.go:91] provisioned docker machine in 949.885683ms
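The SSH command above drops the extra CRI-O flags into /etc/sysconfig/crio.minikube and restarts the service; on minikube's buildroot image the crio unit is expected to source that file (presumably via an EnvironmentFile= directive) so CRIO_MINIKUBE_OPTIONS ends up on the daemon's command line. A quick way to confirm on the node (a sketch):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environmentfile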
	I0116 23:55:22.958121   59622 start.go:300] post-start starting for "old-k8s-version-771669" (driver="kvm2")
	I0116 23:55:22.958136   59622 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 23:55:22.958160   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:22.958492   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 23:55:22.958528   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:22.961489   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.961850   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:22.961879   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:22.962042   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:22.962232   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:22.962423   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:22.962585   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.058948   59622 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 23:55:23.063281   59622 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 23:55:23.063309   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/addons for local assets ...
	I0116 23:55:23.063383   59622 filesync.go:126] Scanning /home/jenkins/minikube-integration/17975-6238/.minikube/files for local assets ...
	I0116 23:55:23.063477   59622 filesync.go:149] local asset: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem -> 149302.pem in /etc/ssl/certs
	I0116 23:55:23.063589   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 23:55:23.075280   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:23.099934   59622 start.go:303] post-start completed in 141.796411ms
	I0116 23:55:23.099963   59622 fix.go:56] fixHost completed within 20.532183026s
	I0116 23:55:23.099986   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.102938   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103320   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.103355   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.103471   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.103682   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103837   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.103981   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.104148   59622 main.go:141] libmachine: Using SSH client type: native
	I0116 23:55:23.104525   59622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0116 23:55:23.104539   59622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 23:55:23.239875   59622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705449323.216935077
	
	I0116 23:55:23.239947   59622 fix.go:206] guest clock: 1705449323.216935077
	I0116 23:55:23.239963   59622 fix.go:219] Guest: 2024-01-16 23:55:23.216935077 +0000 UTC Remote: 2024-01-16 23:55:23.099966517 +0000 UTC m=+357.574360679 (delta=116.96856ms)
	I0116 23:55:23.239987   59622 fix.go:190] guest clock delta is within tolerance: 116.96856ms
	I0116 23:55:23.239994   59622 start.go:83] releasing machines lock for "old-k8s-version-771669", held for 20.672247822s
	I0116 23:55:23.240021   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.240303   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:23.243487   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.243962   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.243999   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.244245   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244731   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.244917   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:23.245023   59622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 23:55:23.245091   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.245237   59622 ssh_runner.go:195] Run: cat /version.json
	I0116 23:55:23.245261   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:23.248169   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248391   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248664   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.248691   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.248835   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.248936   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:23.249012   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:23.249043   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249196   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249284   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:23.249351   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.249454   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:23.249607   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:23.249737   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:23.380837   59622 ssh_runner.go:195] Run: systemctl --version
	I0116 23:55:23.387163   59622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 23:55:23.543350   59622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 23:55:23.550519   59622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 23:55:23.550587   59622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 23:55:23.565019   59622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 23:55:23.565046   59622 start.go:475] detecting cgroup driver to use...
	I0116 23:55:23.565125   59622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 23:55:23.579314   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 23:55:23.591247   59622 docker.go:217] disabling cri-docker service (if available) ...
	I0116 23:55:23.591310   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 23:55:23.605294   59622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 23:55:23.618799   59622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 23:55:23.742752   59622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 23:55:23.876604   59622 docker.go:233] disabling docker service ...
	I0116 23:55:23.876678   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 23:55:23.891240   59622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 23:55:23.906010   59622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 23:55:24.059751   59622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 23:55:24.186517   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 23:55:24.201344   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 23:55:24.218947   59622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 23:55:24.219014   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.230843   59622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 23:55:24.230917   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.243120   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.252562   59622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 23:55:24.264610   59622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 23:55:24.275702   59622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 23:55:24.284982   59622 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 23:55:24.285046   59622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 23:55:24.298681   59622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 23:55:24.307743   59622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 23:55:24.425125   59622 ssh_runner.go:195] Run: sudo systemctl restart crio
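The sed commands above point CRI-O at the requested pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon is restarted. A minimal Go sketch of the same line substitution, using a made-up sample config (this is not minikube's own code):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mimics the two sed substitutions logged above: replace the
// pause_image line and the cgroup_manager line, whatever values they hold.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	// Assumed sample contents; the real file lives at /etc/crio/crio.conf.d/02-crio.conf.
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.1", "cgroupfs"))
}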
	I0116 23:55:24.597300   59622 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 23:55:24.597373   59622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 23:55:24.603241   59622 start.go:543] Will wait 60s for crictl version
	I0116 23:55:24.603314   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:24.607580   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 23:55:24.648923   59622 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 23:55:24.649022   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.696485   59622 ssh_runner.go:195] Run: crio --version
	I0116 23:55:24.754660   59622 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 23:55:24.756045   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetIP
	I0116 23:55:24.759033   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759392   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:24.759432   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:24.759771   59622 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 23:55:24.764448   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
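The bash one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal entry and appending a fresh one that points at the gateway IP. A minimal sketch of that upsert on an in-memory hosts file (the sample contents are assumptions, not taken from the VM):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry mirrors the one-liner above: drop any existing line ending
// in "<tab><name>" and append a fresh "<ip><tab><name>" entry.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, discard it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	// Assumed sample /etc/hosts contents, not read from the VM.
	sample := "127.0.0.1\tlocalhost\n192.168.72.99\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(sample, "192.168.72.1", "host.minikube.internal"))
}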
	I0116 23:55:24.777724   59622 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 23:55:24.777812   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:24.825020   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:24.825088   59622 ssh_runner.go:195] Run: which lz4
	I0116 23:55:24.829208   59622 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 23:55:24.833495   59622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 23:55:24.833523   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 23:55:24.992848   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:27.488098   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:23.916961   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.417588   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:24.441144   60269 api_server.go:72] duration metric: took 2.5243712s to wait for apiserver process to appear ...
	I0116 23:55:24.441176   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:24.441198   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:24.441742   60269 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0116 23:55:24.941292   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.835831   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.835867   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.835882   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.868017   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:27.868058   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:27.942282   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:27.960876   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:27.960928   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:28.442258   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.449969   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.450001   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:24.397456   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:26.397862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.404313   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:28.941892   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:28.959617   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 23:55:28.959651   60269 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 23:55:29.441742   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0116 23:55:29.446933   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0116 23:55:29.455520   60269 api_server.go:141] control plane version: v1.28.4
	I0116 23:55:29.455548   60269 api_server.go:131] duration metric: took 5.014364838s to wait for apiserver health ...
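The retries above show the usual progression: /healthz first refuses connections, then returns 403 for the anonymous user, then 500 while post-start hooks finish, and finally 200 "ok". A minimal sketch of such a polling loop, assuming the endpoint URL from the log and a 500ms retry interval (not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz keeps polling the apiserver's /healthz endpoint until it
// answers 200 or the deadline passes; 403 and 500 answers are treated as
// "not ready yet", matching the progression in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The sketch skips certificate verification; the real client trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.144:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}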
	I0116 23:55:29.455561   60269 cni.go:84] Creating CNI manager for ""
	I0116 23:55:29.455569   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:29.457775   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:26.372140   59622 crio.go:444] Took 1.542968 seconds to copy over tarball
	I0116 23:55:26.372233   59622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 23:55:29.316720   59622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944443375s)
	I0116 23:55:29.316749   59622 crio.go:451] Took 2.944578 seconds to extract the tarball
	I0116 23:55:29.316760   59622 ssh_runner.go:146] rm: /preloaded.tar.lz4
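The preload tarball copied above is unpacked into /var with lz4 as the tar decompressor and then removed. A minimal sketch that shells out to the same commands, assuming the path from the log and that sudo, tar and lz4 are available on the guest (not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same two commands as the log: unpack the preloaded
// image tarball into /var with lz4 decompression, then delete the tarball.
func extractPreload(tarball string) error {
	steps := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball},
		{"sudo", "rm", "-f", tarball},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}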
	I0116 23:55:29.359053   59622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 23:55:29.407438   59622 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 23:55:29.407466   59622 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 23:55:29.407526   59622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.407582   59622 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.407605   59622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.407624   59622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.407656   59622 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 23:55:29.407657   59622 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.407840   59622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.407530   59622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.409393   59622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 23:55:29.409457   59622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:29.409399   59622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.409480   59622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.409647   59622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.409675   59622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.409682   59622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.622629   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.626907   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.630596   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 23:55:29.633693   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.635868   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.644919   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.649358   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.724339   59622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 23:55:29.724400   59622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.724467   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.795647   59622 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 23:55:29.795694   59622 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.795747   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.844312   59622 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 23:55:29.844373   59622 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 23:55:29.844427   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849856   59622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 23:55:29.849876   59622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.849908   59622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.849911   59622 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 23:55:29.849928   59622 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.849956   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.849967   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850005   59622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 23:55:29.850030   59622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.850047   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 23:55:29.850062   59622 ssh_runner.go:195] Run: which crictl
	I0116 23:55:29.850101   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 23:55:29.852839   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 23:55:29.872722   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 23:55:29.872753   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 23:55:29.872821   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 23:55:29.872997   59622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 23:55:29.963139   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 23:55:29.967047   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 23:55:29.981726   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 23:55:30.047814   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 23:55:30.047906   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 23:55:30.047972   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 23:55:30.048002   59622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 23:55:30.281680   59622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:30.423881   59622 cache_images.go:92] LoadImages completed in 1.016396141s
	W0116 23:55:30.423996   59622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17975-6238/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0116 23:55:30.424113   59622 ssh_runner.go:195] Run: crio config
	I0116 23:55:30.486915   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:30.486935   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:30.486951   59622 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 23:55:30.486975   59622 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-771669 NodeName:old-k8s-version-771669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 23:55:30.487151   59622 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-771669"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-771669
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.114:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 23:55:30.487252   59622 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-771669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
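The kubeadm and kubelet configuration above is generated from a small set of cluster parameters (Kubernetes version, cluster name, pod and service CIDRs, node IP). A minimal text/template sketch that renders a ClusterConfiguration stanza of the same shape; the struct, field names, and template here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterCfg holds the handful of inputs that vary between profiles in the
// ClusterConfiguration shown above.
type clusterCfg struct {
	Version       string
	ClusterName   string
	PodSubnet     string
	ServiceSubnet string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	cfg := clusterCfg{
		Version:       "v1.16.0",
		ClusterName:   "old-k8s-version-771669",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	}
	template.Must(template.New("kubeadm").Parse(clusterTmpl)).Execute(os.Stdout, cfg)
}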
	I0116 23:55:30.487320   59622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 23:55:30.497629   59622 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 23:55:30.497706   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 23:55:30.505710   59622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 23:55:30.523292   59622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 23:55:30.539544   59622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 23:55:30.557436   59622 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0116 23:55:30.561329   59622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 23:55:29.488446   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:32.775251   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:29.459468   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:29.471218   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:29.488687   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:29.499433   60269 system_pods.go:59] 8 kube-system pods found
	I0116 23:55:29.499458   60269 system_pods.go:61] "coredns-5dd5756b68-7kwrd" [38a96fe5-70a8-46e6-b899-b39558e08855] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 23:55:29.499465   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [bc2e7805-71f2-4924-80d7-2dd853ebeea9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 23:55:29.499472   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [8c01f8da-0156-4d16-b5e7-262427171137] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 23:55:29.499484   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [04b93c96-ebc0-4257-b480-7be1ea9f7fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 23:55:29.499496   60269 system_pods.go:61] "kube-proxy-jmq58" [ec5c282f-04c8-4839-a16f-0a2024e0d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 23:55:29.499521   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [11e73d49-a3ba-44b3-9630-fd07fb23777f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 23:55:29.499533   60269 system_pods.go:61] "metrics-server-57f55c9bc5-bkbpm" [6ddb8af1-da20-4400-b6ba-6f0cf342b115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:55:29.499538   60269 system_pods.go:61] "storage-provisioner" [5b22598c-c5e0-4a9e-96f3-1732ecd018a1] Running
	I0116 23:55:29.499544   60269 system_pods.go:74] duration metric: took 10.840963ms to wait for pod list to return data ...
	I0116 23:55:29.499550   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:29.502918   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:29.502954   60269 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:29.502965   60269 node_conditions.go:105] duration metric: took 3.409475ms to run NodePressure ...
	I0116 23:55:29.502985   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:29.743687   60269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749616   60269 kubeadm.go:787] kubelet initialised
	I0116 23:55:29.749676   60269 kubeadm.go:788] duration metric: took 5.958924ms waiting for restarted kubelet to initialise ...
	I0116 23:55:29.749687   60269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:29.756788   60269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.762593   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762669   60269 pod_ready.go:81] duration metric: took 5.856721ms waiting for pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.762686   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "coredns-5dd5756b68-7kwrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.762695   60269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.768772   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768801   60269 pod_ready.go:81] duration metric: took 6.092773ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.768816   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.768824   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.775409   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775442   60269 pod_ready.go:81] duration metric: took 6.605139ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.775455   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.775463   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:29.902106   60269 pod_ready.go:97] node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902206   60269 pod_ready.go:81] duration metric: took 126.731712ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:29.902236   60269 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-967325" hosting pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-967325" has status "Ready":"False"
	I0116 23:55:29.902269   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829869   60269 pod_ready.go:92] pod "kube-proxy-jmq58" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:30.829891   60269 pod_ready.go:81] duration metric: took 927.598475ms waiting for pod "kube-proxy-jmq58" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:30.829900   60269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:32.831782   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.899557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:33.397105   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:30.574029   59622 certs.go:56] Setting up /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669 for IP: 192.168.72.114
	I0116 23:55:30.890778   59622 certs.go:190] acquiring lock for shared ca certs: {Name:mkc8804b72a15e0a5f4180ae51c705dafc626362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:30.890952   59622 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key
	I0116 23:55:30.891020   59622 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key
	I0116 23:55:30.891123   59622 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/client.key
	I0116 23:55:31.309085   59622 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key.9adeb8c5
	I0116 23:55:31.309205   59622 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key
	I0116 23:55:31.309360   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem (1338 bytes)
	W0116 23:55:31.309405   59622 certs.go:433] ignoring /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930_empty.pem, impossibly tiny 0 bytes
	I0116 23:55:31.309417   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 23:55:31.309461   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/ca.pem (1082 bytes)
	I0116 23:55:31.309514   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/cert.pem (1123 bytes)
	I0116 23:55:31.309547   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/certs/home/jenkins/minikube-integration/17975-6238/.minikube/certs/key.pem (1679 bytes)
	I0116 23:55:31.309606   59622 certs.go:437] found cert: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem (1708 bytes)
	I0116 23:55:31.310493   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 23:55:31.335886   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 23:55:31.358617   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 23:55:31.382183   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/old-k8s-version-771669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 23:55:31.407509   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 23:55:31.429683   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 23:55:31.453368   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 23:55:31.476083   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 23:55:31.499326   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 23:55:31.522939   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/certs/14930.pem --> /usr/share/ca-certificates/14930.pem (1338 bytes)
	I0116 23:55:31.548912   59622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/ssl/certs/149302.pem --> /usr/share/ca-certificates/149302.pem (1708 bytes)
	I0116 23:55:31.571716   59622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 23:55:31.587851   59622 ssh_runner.go:195] Run: openssl version
	I0116 23:55:31.593185   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14930.pem && ln -fs /usr/share/ca-certificates/14930.pem /etc/ssl/certs/14930.pem"
	I0116 23:55:31.602521   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.606986   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 22:47 /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.607049   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14930.pem
	I0116 23:55:31.612447   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14930.pem /etc/ssl/certs/51391683.0"
	I0116 23:55:31.622043   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149302.pem && ln -fs /usr/share/ca-certificates/149302.pem /etc/ssl/certs/149302.pem"
	I0116 23:55:31.631959   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636586   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 22:47 /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.636653   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149302.pem
	I0116 23:55:31.642415   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149302.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 23:55:31.651566   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 23:55:31.660990   59622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665574   59622 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.665624   59622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 23:55:31.671129   59622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 23:55:31.680951   59622 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 23:55:31.685144   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 23:55:31.690488   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 23:55:31.696140   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 23:55:31.702013   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 23:55:31.707887   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 23:55:31.713601   59622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
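Each openssl -checkend 86400 call above asserts that the corresponding control-plane certificate stays valid for at least another 24 hours. A minimal Go equivalent using crypto/x509; the certificate path is a placeholder command-line argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the
// given window, i.e. the condition that makes `openssl x509 -checkend` fail.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkcert <cert.pem>")
		os.Exit(1)
	}
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}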
	I0116 23:55:31.719957   59622 kubeadm.go:404] StartCluster: {Name:old-k8s-version-771669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-771669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 23:55:31.720050   59622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 23:55:31.720106   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:31.764090   59622 cri.go:89] found id: ""
	I0116 23:55:31.764179   59622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 23:55:31.772783   59622 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 23:55:31.772800   59622 kubeadm.go:636] restartCluster start
	I0116 23:55:31.772900   59622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 23:55:31.782951   59622 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:31.784108   59622 kubeconfig.go:92] found "old-k8s-version-771669" server: "https://192.168.72.114:8443"
	I0116 23:55:31.786822   59622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 23:55:31.795516   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:31.795564   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:31.806541   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.296087   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.296205   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.308136   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:32.796155   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:32.796250   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:32.812275   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.295834   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.295918   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.309867   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:33.796504   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:33.796592   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:33.808880   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.296500   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.296567   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.308101   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.795674   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:34.795765   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:34.808334   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:35.295900   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.295998   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.308522   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:34.987445   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:37.488388   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:34.836821   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:36.837242   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.896319   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.396168   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:35.796048   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:35.796157   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:35.809841   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.296449   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.296573   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.309339   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:36.795874   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:36.795953   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:36.810740   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.296322   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.296421   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.308384   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:37.796469   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:37.796576   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:37.810173   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.295663   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.295750   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.307391   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:38.795952   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:38.796050   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:38.809147   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.295669   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.295754   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.308210   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.796104   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:39.796226   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:39.808134   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:40.295713   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.295815   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.307552   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:39.986946   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.487118   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:38.838230   60269 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:39.837451   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0116 23:55:39.837475   60269 pod_ready.go:81] duration metric: took 9.007568234s waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:39.837495   60269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:41.844595   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.397089   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:42.896014   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:40.795619   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:40.795698   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:40.809529   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.296081   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.296153   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.309642   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.796355   59622 api_server.go:166] Checking apiserver status ...
	I0116 23:55:41.796439   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 23:55:41.808383   59622 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 23:55:41.808409   59622 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 23:55:41.808417   59622 kubeadm.go:1135] stopping kube-system containers ...
	I0116 23:55:41.808426   59622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 23:55:41.808480   59622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 23:55:41.851612   59622 cri.go:89] found id: ""
	I0116 23:55:41.851668   59622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 23:55:41.867103   59622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:55:41.876244   59622 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:55:41.876306   59622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886007   59622 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 23:55:41.886029   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.004968   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:42.972680   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.175241   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.242840   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:43.330848   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:55:43.330935   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:43.831021   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.331539   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:44.831545   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.331601   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:55:45.354248   59622 api_server.go:72] duration metric: took 2.023403352s to wait for apiserver process to appear ...
	I0116 23:55:45.354271   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:55:45.354287   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:45.354802   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": dial tcp 192.168.72.114:8443: connect: connection refused
	I0116 23:55:44.988114   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.486765   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:43.846368   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.848129   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:48.344150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:44.897147   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:47.396873   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:45.855032   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:50.855392   59622 api_server.go:269] stopped: https://192.168.72.114:8443/healthz: Get "https://192.168.72.114:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 23:55:50.855430   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.372327   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.372361   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.372383   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.429072   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 23:55:51.429102   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 23:55:51.854848   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:51.861367   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:51.861393   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.354990   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.360925   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 23:55:52.360951   59622 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 23:55:52.854778   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:55:52.861036   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:55:52.868982   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:55:52.869013   59622 api_server.go:131] duration metric: took 7.514729701s to wait for apiserver health ...
	I0116 23:55:52.869024   59622 cni.go:84] Creating CNI manager for ""
	I0116 23:55:52.869033   59622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:55:52.870842   59622 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:55:49.486891   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.489411   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:50.345462   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.345784   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:49.397270   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:51.397489   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:53.398253   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:52.872155   59622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:55:52.883251   59622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:55:52.904708   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:55:52.916515   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:55:52.916550   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:55:52.916558   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:55:52.916564   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:55:52.916571   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Pending
	I0116 23:55:52.916577   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:55:52.916584   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:55:52.916597   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:55:52.916606   59622 system_pods.go:74] duration metric: took 11.876364ms to wait for pod list to return data ...
	I0116 23:55:52.916618   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:55:52.920125   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:55:52.920158   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:55:52.920178   59622 node_conditions.go:105] duration metric: took 3.551281ms to run NodePressure ...
	I0116 23:55:52.920199   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 23:55:53.157112   59622 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161560   59622 kubeadm.go:787] kubelet initialised
	I0116 23:55:53.161590   59622 kubeadm.go:788] duration metric: took 4.45031ms waiting for restarted kubelet to initialise ...
	I0116 23:55:53.161601   59622 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:53.167210   59622 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.172679   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172705   59622 pod_ready.go:81] duration metric: took 5.453621ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.172713   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.172722   59622 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.178090   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178121   59622 pod_ready.go:81] duration metric: took 5.38864ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.178132   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "etcd-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.178141   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.183932   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183963   59622 pod_ready.go:81] duration metric: took 5.809315ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.183973   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.183979   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.309476   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309502   59622 pod_ready.go:81] duration metric: took 125.513469ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.309518   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.309526   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:53.710400   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710426   59622 pod_ready.go:81] duration metric: took 400.892114ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:53.710435   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-proxy-9ghls" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:53.710441   59622 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:55:54.108608   59622 pod_ready.go:97] node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108638   59622 pod_ready.go:81] duration metric: took 398.187187ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	E0116 23:55:54.108652   59622 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-771669" hosting pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:54.108661   59622 pod_ready.go:38] duration metric: took 947.048567ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:55:54.108682   59622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:55:54.128862   59622 ops.go:34] apiserver oom_adj: -16
	I0116 23:55:54.128889   59622 kubeadm.go:640] restartCluster took 22.356081524s
	I0116 23:55:54.128900   59622 kubeadm.go:406] StartCluster complete in 22.408946885s
	I0116 23:55:54.128919   59622 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.129004   59622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:55:54.131909   59622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:55:54.132201   59622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:55:54.132350   59622 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 23:55:54.132423   59622 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-771669"
	I0116 23:55:54.132445   59622 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-771669"
	I0116 23:55:54.132446   59622 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-771669"
	W0116 23:55:54.132457   59622 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:55:54.132467   59622 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:54.132468   59622 config.go:182] Loaded profile config "old-k8s-version-771669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0116 23:55:54.132479   59622 addons.go:243] addon metrics-server should already be in state true
	I0116 23:55:54.132520   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132551   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.132889   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.132943   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133041   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133083   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.133245   59622 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-771669"
	I0116 23:55:54.133294   59622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-771669"
	I0116 23:55:54.133724   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.133789   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.148645   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0116 23:55:54.148879   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0116 23:55:54.149227   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149356   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.149715   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149739   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.149900   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.149917   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.150032   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150210   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.150281   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.150883   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.150932   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.154047   59622 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-771669"
	W0116 23:55:54.154070   59622 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:55:54.154099   59622 host.go:66] Checking if "old-k8s-version-771669" exists ...
	I0116 23:55:54.154457   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.154502   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.156296   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0116 23:55:54.156719   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.157170   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.157199   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.157673   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.158266   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.158321   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.168301   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0116 23:55:54.168898   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.169505   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.169524   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.169888   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.170106   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.171966   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.174198   59622 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:55:54.173406   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0116 23:55:54.179587   59622 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.179605   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:55:54.179625   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.174560   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0116 23:55:54.180004   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180109   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.180627   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180653   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180768   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.180790   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.180993   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181177   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.181353   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.181578   59622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:55:54.181627   59622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:55:54.183580   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.185359   59622 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:55:54.184028   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.184548   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.186663   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:55:54.186672   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.186679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:55:54.186699   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.186698   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.186864   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.186964   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.187041   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.189698   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190070   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.190133   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.190266   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.190461   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.190582   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.190678   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.215481   59622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0116 23:55:54.215974   59622 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:55:54.216416   59622 main.go:141] libmachine: Using API Version  1
	I0116 23:55:54.216435   59622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:55:54.216816   59622 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:55:54.217016   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetState
	I0116 23:55:54.219327   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .DriverName
	I0116 23:55:54.219556   59622 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.219571   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:55:54.219588   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHHostname
	I0116 23:55:54.222719   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223367   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHPort
	I0116 23:55:54.223154   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:a2:e8", ip: ""} in network mk-old-k8s-version-771669: {Iface:virbr4 ExpiryTime:2024-01-17 00:55:15 +0000 UTC Type:0 Mac:52:54:00:31:a2:e8 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:old-k8s-version-771669 Clientid:01:52:54:00:31:a2:e8}
	I0116 23:55:54.223442   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | domain old-k8s-version-771669 has defined IP address 192.168.72.114 and MAC address 52:54:00:31:a2:e8 in network mk-old-k8s-version-771669
	I0116 23:55:54.223564   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHKeyPath
	I0116 23:55:54.223712   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .GetSSHUsername
	I0116 23:55:54.223850   59622 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/old-k8s-version-771669/id_rsa Username:docker}
	I0116 23:55:54.356173   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:55:54.356192   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:55:54.371191   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:55:54.410651   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:55:54.410679   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:55:54.413826   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:55:54.524186   59622 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.524211   59622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:55:54.553600   59622 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 23:55:54.610636   59622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:55:54.692080   59622 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-771669" context rescaled to 1 replicas
	I0116 23:55:54.692117   59622 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:55:54.694001   59622 out.go:177] * Verifying Kubernetes components...
	I0116 23:55:54.695339   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:55:55.104119   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104142   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104162   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104148   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104471   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104493   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.104504   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.104514   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.104558   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104729   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.104745   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.104748   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105133   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105152   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.105185   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.105199   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.105402   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.105496   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.105518   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.113836   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.113861   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.114230   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.114254   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.114275   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.125955   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.125983   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.125955   59622 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:55:55.126228   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126243   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126267   59622 main.go:141] libmachine: Making call to close driver server
	I0116 23:55:55.126278   59622 main.go:141] libmachine: (old-k8s-version-771669) Calling .Close
	I0116 23:55:55.126579   59622 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:55:55.126599   59622 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:55:55.126609   59622 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-771669"
	I0116 23:55:55.126587   59622 main.go:141] libmachine: (old-k8s-version-771669) DBG | Closing plugin on server side
	I0116 23:55:55.128592   59622 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 23:55:55.129717   59622 addons.go:505] enable addons completed in 997.38021ms: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 23:55:53.987019   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.987081   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.485357   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:54.345875   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:56.347375   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:55.898737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.905488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:57.130634   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:55:59.630394   59622 node_ready.go:58] node "old-k8s-version-771669" has status "Ready":"False"
	I0116 23:56:00.487739   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.985925   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:55:58.845233   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:00.845467   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:03.344488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.130130   59622 node_ready.go:49] node "old-k8s-version-771669" has status "Ready":"True"
	I0116 23:56:02.130152   59622 node_ready.go:38] duration metric: took 7.004088356s waiting for node "old-k8s-version-771669" to be "Ready" ...
	I0116 23:56:02.130160   59622 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.135239   59622 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140322   59622 pod_ready.go:92] pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.140347   59622 pod_ready.go:81] duration metric: took 5.084772ms waiting for pod "coredns-5644d7b6d9-9njqp" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.140358   59622 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144917   59622 pod_ready.go:92] pod "etcd-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.144938   59622 pod_ready.go:81] duration metric: took 4.572247ms waiting for pod "etcd-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.144946   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149588   59622 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.149606   59622 pod_ready.go:81] duration metric: took 4.65461ms waiting for pod "kube-apiserver-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.149614   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153874   59622 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.153891   59622 pod_ready.go:81] duration metric: took 4.272031ms waiting for pod "kube-controller-manager-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.153899   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531721   59622 pod_ready.go:92] pod "kube-proxy-9ghls" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.531742   59622 pod_ready.go:81] duration metric: took 377.837979ms waiting for pod "kube-proxy-9ghls" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.531751   59622 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930934   59622 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace has status "Ready":"True"
	I0116 23:56:02.930957   59622 pod_ready.go:81] duration metric: took 399.199037ms waiting for pod "kube-scheduler-old-k8s-version-771669" in "kube-system" namespace to be "Ready" ...
	I0116 23:56:02.930966   59622 pod_ready.go:38] duration metric: took 800.791409ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:56:02.930982   59622 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:56:02.931031   59622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:56:02.945606   59622 api_server.go:72] duration metric: took 8.253459173s to wait for apiserver process to appear ...
	I0116 23:56:02.945631   59622 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:56:02.945649   59622 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0116 23:56:02.952493   59622 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0116 23:56:02.953510   59622 api_server.go:141] control plane version: v1.16.0
	I0116 23:56:02.953536   59622 api_server.go:131] duration metric: took 7.895148ms to wait for apiserver health ...
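(The healthz probe logged above can be reproduced by hand; a minimal sketch only, assuming the same VM IP is reachable from the host and that the API server certificate is not trusted locally, hence -k:)

	curl -sk https://192.168.72.114:8443/healthz
	# expected response body: ok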
	I0116 23:56:02.953545   59622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:56:03.133648   59622 system_pods.go:59] 7 kube-system pods found
	I0116 23:56:03.133673   59622 system_pods.go:61] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.133679   59622 system_pods.go:61] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.133683   59622 system_pods.go:61] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.133688   59622 system_pods.go:61] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.133691   59622 system_pods.go:61] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.133695   59622 system_pods.go:61] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.133698   59622 system_pods.go:61] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.133704   59622 system_pods.go:74] duration metric: took 180.152859ms to wait for pod list to return data ...
	I0116 23:56:03.133710   59622 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:56:03.331291   59622 default_sa.go:45] found service account: "default"
	I0116 23:56:03.331318   59622 default_sa.go:55] duration metric: took 197.601815ms for default service account to be created ...
	I0116 23:56:03.331327   59622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:56:03.535418   59622 system_pods.go:86] 7 kube-system pods found
	I0116 23:56:03.535445   59622 system_pods.go:89] "coredns-5644d7b6d9-9njqp" [a8ca0a69-ad00-45df-939c-881288d37686] Running
	I0116 23:56:03.535450   59622 system_pods.go:89] "etcd-old-k8s-version-771669" [d8c6fafa-5d33-4a3f-9654-9c89e221a0fd] Running
	I0116 23:56:03.535454   59622 system_pods.go:89] "kube-apiserver-old-k8s-version-771669" [81c9faba-99f2-493b-8fab-1d82404e158e] Running
	I0116 23:56:03.535459   59622 system_pods.go:89] "kube-controller-manager-old-k8s-version-771669" [034c4363-3298-44bd-8ceb-b6a6f0af421d] Running
	I0116 23:56:03.535462   59622 system_pods.go:89] "kube-proxy-9ghls" [341db35b-48bc-40ec-81c2-0be006551aa4] Running
	I0116 23:56:03.535466   59622 system_pods.go:89] "kube-scheduler-old-k8s-version-771669" [7c59ec6d-2014-430f-8a8c-ae125e1e3d42] Running
	I0116 23:56:03.535470   59622 system_pods.go:89] "storage-provisioner" [2542a96f-18aa-457e-9cfd-cf4b0aa4448a] Running
	I0116 23:56:03.535476   59622 system_pods.go:126] duration metric: took 204.144185ms to wait for k8s-apps to be running ...
	I0116 23:56:03.535483   59622 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:56:03.535528   59622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:56:03.558457   59622 system_svc.go:56] duration metric: took 22.958568ms WaitForService to wait for kubelet.
	I0116 23:56:03.558483   59622 kubeadm.go:581] duration metric: took 8.866344408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:56:03.558508   59622 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:56:03.731393   59622 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:56:03.731421   59622 node_conditions.go:123] node cpu capacity is 2
	I0116 23:56:03.731429   59622 node_conditions.go:105] duration metric: took 172.916822ms to run NodePressure ...
	I0116 23:56:03.731440   59622 start.go:228] waiting for startup goroutines ...
	I0116 23:56:03.731446   59622 start.go:233] waiting for cluster config update ...
	I0116 23:56:03.731455   59622 start.go:242] writing updated cluster config ...
	I0116 23:56:03.731701   59622 ssh_runner.go:195] Run: rm -f paused
	I0116 23:56:03.779121   59622 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 23:56:03.780832   59622 out.go:177] 
	W0116 23:56:03.782249   59622 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 23:56:03.783563   59622 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 23:56:03.784839   59622 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-771669" cluster and "default" namespace by default
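(A rough manual equivalent of the pod_ready.go waits logged above is kubectl wait against the same system-pod labels; a sketch only, assuming the kubeconfig context shown in the log:)

	kubectl --context old-k8s-version-771669 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m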
	I0116 23:56:00.398654   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:02.895567   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:04.986421   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:06.987967   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.844145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.844338   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:05.397178   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:07.895626   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.486597   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:11.987301   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:10.345558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.346663   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:09.896758   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:12.397091   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.488021   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.488653   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.844671   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:16.846046   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:14.897098   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:17.396519   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.986905   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.488422   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:18.846198   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.344147   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:19.397728   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:21.896773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.986213   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:25.986326   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:27.987150   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:23.845648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.344054   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:28.344553   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:24.396383   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:26.896341   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.487401   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.986835   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:30.346441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:32.847915   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:29.396831   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:31.397001   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:33.896875   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.486456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.488505   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:34.852382   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:37.347707   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:35.897340   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:38.397188   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.987512   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.487096   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:39.845150   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:40.397474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:42.895926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.985826   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.987077   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:44.344935   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:46.844558   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:45.397742   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:47.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:48.987672   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.488276   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.344755   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:51.844573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:49.902616   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:52.397613   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.989294   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:56.486373   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:53.844691   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:55.844956   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.345033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:54.899462   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:57.396680   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:58.986702   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.485949   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.486250   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:00.347078   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:02.845105   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:56:59.397016   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:01.397815   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:03.898419   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.486385   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.486685   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:05.344293   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:07.345029   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:06.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:08.397358   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.986254   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:11.986807   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:09.845903   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.345589   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:10.896505   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:12.896725   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:13.986990   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.487092   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:14.845336   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:16.845800   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:15.396130   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:17.399737   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:18.986833   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:20.987345   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.486929   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.344648   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.345638   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:19.896048   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:21.897272   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:25.987181   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.488006   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:23.846298   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.345451   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:28.346186   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:24.398032   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:26.896171   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.987497   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:33.485899   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:30.347831   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:32.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:29.398760   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:31.896331   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.486038   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.487296   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:35.344615   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:37.844449   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:34.397051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:36.400079   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:38.896897   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.492372   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.987336   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:39.847519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:42.346252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:41.396236   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.396714   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:43.988240   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:46.486455   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:48.487134   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:44.848036   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.345407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:45.397310   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:47.397378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:50.986902   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.492230   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.845193   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.845627   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:49.397826   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:51.895923   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:53.897342   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:55.986753   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:57.986861   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:54.344373   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.344864   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.345725   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:56.396684   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:57:58.897155   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.486888   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.987550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:00.844347   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:02.846516   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:01.396565   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:03.397374   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:04.990116   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.487567   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.345481   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:07.844570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:05.897023   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:08.396985   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.990087   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.490589   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:09.844815   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:11.845732   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:10.895979   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:12.896502   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.986451   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.986611   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:14.344767   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:16.844872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:15.398203   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:17.399261   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:18.987191   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.487703   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:23.487926   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.347376   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:21.845439   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:19.896972   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:22.397424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:25.987262   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.486174   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.344012   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.347050   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:24.398243   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:26.896557   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.987243   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.988415   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:28.844551   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:30.845899   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:32.846576   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:29.396646   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:31.397556   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:33.896411   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.486850   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.985735   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.344337   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.344473   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:35.896685   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:37.898876   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.986999   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.486890   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:39.345534   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:41.345897   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:40.396241   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:42.396546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.987464   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.988853   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:43.846142   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.343994   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:44.396719   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:46.896228   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.896671   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:49.486803   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:51.491540   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:48.845009   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.847872   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:52.847933   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:50.897309   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.396763   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:53.987492   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:56.486550   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:58.486963   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.346425   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.347346   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:55.397687   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:57.399191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:00.987456   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.486837   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.843983   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.844326   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:58:59.895907   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:01.896151   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.900424   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:05.991223   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.486493   59938 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:03.844751   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.344021   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.344949   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:06.397063   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.895750   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:08.987148   59938 pod_ready.go:81] duration metric: took 4m0.007687151s waiting for pod "metrics-server-57f55c9bc5-xbr22" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:08.987175   59938 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 23:59:08.987182   59938 pod_ready.go:38] duration metric: took 4m1.609147819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:08.987199   59938 api_server.go:52] waiting for apiserver process to appear ...
	I0116 23:59:08.987235   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:08.987285   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:09.035133   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:09.035154   59938 cri.go:89] found id: ""
	I0116 23:59:09.035161   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:09.035211   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.039082   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:09.039138   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:09.085096   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:09.085167   59938 cri.go:89] found id: ""
	I0116 23:59:09.085181   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:09.085246   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.090821   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:09.090893   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:09.127517   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.127548   59938 cri.go:89] found id: ""
	I0116 23:59:09.127558   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:09.127620   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.131643   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:09.131759   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:09.168954   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:09.168979   59938 cri.go:89] found id: ""
	I0116 23:59:09.168988   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:09.169049   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.173389   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:09.173454   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:09.212516   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.212543   59938 cri.go:89] found id: ""
	I0116 23:59:09.212549   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:09.212597   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.216401   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:09.216458   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:09.253140   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.253166   59938 cri.go:89] found id: ""
	I0116 23:59:09.253176   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:09.253235   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.257248   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:09.257315   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:09.296077   59938 cri.go:89] found id: ""
	I0116 23:59:09.296108   59938 logs.go:284] 0 containers: []
	W0116 23:59:09.296119   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:09.296126   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:09.296184   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:09.346212   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:09.346234   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:09.346240   59938 cri.go:89] found id: ""
	I0116 23:59:09.346261   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:09.346320   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.350651   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:09.353960   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:09.353984   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:09.387875   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:09.387900   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:09.428147   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:09.428173   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:09.481107   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:09.481135   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:09.536958   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:09.536994   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:09.550512   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:09.550547   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:09.605837   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:09.605870   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:10.096496   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:10.096548   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:10.134931   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:10.134973   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:10.276791   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:10.276824   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:10.335509   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:10.335544   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:10.395664   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:10.395708   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.431013   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:10.431051   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:12.975358   59938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:59:12.989628   59938 api_server.go:72] duration metric: took 4m12.851755215s to wait for apiserver process to appear ...
	I0116 23:59:12.989650   59938 api_server.go:88] waiting for apiserver healthz status ...
	I0116 23:59:12.989689   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:12.989738   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:13.026039   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.026071   59938 cri.go:89] found id: ""
	I0116 23:59:13.026083   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:13.026138   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.030174   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:13.030236   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:13.067808   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:13.067834   59938 cri.go:89] found id: ""
	I0116 23:59:13.067840   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:13.067888   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.072042   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:13.072118   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:13.111330   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.111351   59938 cri.go:89] found id: ""
	I0116 23:59:13.111359   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:13.111403   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.115095   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:13.115187   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:13.158668   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:13.158691   59938 cri.go:89] found id: ""
	I0116 23:59:13.158699   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:13.158758   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.162836   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:13.162899   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:13.202353   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:13.202372   59938 cri.go:89] found id: ""
	I0116 23:59:13.202379   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:13.202425   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.206475   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:13.206544   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:13.241036   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:13.241069   59938 cri.go:89] found id: ""
	I0116 23:59:13.241080   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:13.241136   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.245245   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:13.245309   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:13.286069   59938 cri.go:89] found id: ""
	I0116 23:59:13.286098   59938 logs.go:284] 0 containers: []
	W0116 23:59:13.286107   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:13.286115   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:13.286178   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:13.324129   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.324148   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.324152   59938 cri.go:89] found id: ""
	I0116 23:59:13.324159   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:13.324201   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.328325   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:13.332030   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:13.332052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:13.345141   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:13.345181   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:13.404778   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:13.404809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:13.441286   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:13.441323   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:13.503668   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:13.503702   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:13.542599   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:13.542631   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:10.347184   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:12.844417   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:10.896545   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.397454   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:13.578579   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:13.578609   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:13.615906   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:13.615934   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:14.022019   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:14.022058   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:14.139776   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:14.139809   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:14.201936   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:14.201970   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:14.240473   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:14.240500   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:14.291008   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:14.291037   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:16.843555   59938 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I0116 23:59:16.849532   59938 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I0116 23:59:16.850519   59938 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 23:59:16.850538   59938 api_server.go:131] duration metric: took 3.860882856s to wait for apiserver health ...
	I0116 23:59:16.850547   59938 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 23:59:16.850568   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 23:59:16.850610   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 23:59:16.900417   59938 cri.go:89] found id: "bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:16.900434   59938 cri.go:89] found id: ""
	I0116 23:59:16.900441   59938 logs.go:284] 1 containers: [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1]
	I0116 23:59:16.900493   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.905495   59938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 23:59:16.905548   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 23:59:16.945387   59938 cri.go:89] found id: "3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:16.945406   59938 cri.go:89] found id: ""
	I0116 23:59:16.945413   59938 logs.go:284] 1 containers: [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d]
	I0116 23:59:16.945463   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.949948   59938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 23:59:16.950016   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 23:59:16.987183   59938 cri.go:89] found id: "77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:16.987202   59938 cri.go:89] found id: ""
	I0116 23:59:16.987209   59938 logs.go:284] 1 containers: [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782]
	I0116 23:59:16.987252   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:16.992140   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 23:59:16.992191   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 23:59:17.029253   59938 cri.go:89] found id: "307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.029275   59938 cri.go:89] found id: ""
	I0116 23:59:17.029282   59938 logs.go:284] 1 containers: [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26]
	I0116 23:59:17.029336   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.033524   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 23:59:17.033609   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 23:59:17.068889   59938 cri.go:89] found id: "beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:17.068913   59938 cri.go:89] found id: ""
	I0116 23:59:17.068932   59938 logs.go:284] 1 containers: [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f]
	I0116 23:59:17.068986   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.072818   59938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 23:59:17.072885   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 23:59:17.111186   59938 cri.go:89] found id: "fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.111207   59938 cri.go:89] found id: ""
	I0116 23:59:17.111216   59938 logs.go:284] 1 containers: [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db]
	I0116 23:59:17.111279   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.115133   59938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 23:59:17.115192   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 23:59:17.150279   59938 cri.go:89] found id: ""
	I0116 23:59:17.150307   59938 logs.go:284] 0 containers: []
	W0116 23:59:17.150316   59938 logs.go:286] No container was found matching "kindnet"
	I0116 23:59:17.150321   59938 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 23:59:17.150401   59938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 23:59:17.192284   59938 cri.go:89] found id: "60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.192321   59938 cri.go:89] found id: "d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.192328   59938 cri.go:89] found id: ""
	I0116 23:59:17.192338   59938 logs.go:284] 2 containers: [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6]
	I0116 23:59:17.192394   59938 ssh_runner.go:195] Run: which crictl
	I0116 23:59:17.196472   59938 ssh_runner.go:195] Run: which crictl
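The block above is minikube's container-discovery pass: for each control-plane component it lists matching containers with crictl, then resolves the crictl path with "which crictl". A condensed manual equivalent on the node (a sketch only; these are the same commands the log shows, assuming crictl is on the PATH and CRI-O is the runtime, as in this run) is:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  sudo crictl ps -a --quiet --name="$name"
	done

The container IDs returned here are then fed to "crictl logs --tail 400 <id>" in the gathering steps that follow.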
	I0116 23:59:17.200243   59938 logs.go:123] Gathering logs for storage-provisioner [d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6] ...
	I0116 23:59:17.200266   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53f5dc02719d71d6e7d78e6b733c8b7479a5eb63cd71fb90897e8f873e643d6"
	I0116 23:59:17.240155   59938 logs.go:123] Gathering logs for dmesg ...
	I0116 23:59:17.240188   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 23:59:17.252553   59938 logs.go:123] Gathering logs for kube-controller-manager [fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db] ...
	I0116 23:59:17.252585   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa4073a76d415ccc501c287e86dcb755929b3c5a1b526c43e75c3aff5845f9db"
	I0116 23:59:17.304688   59938 logs.go:123] Gathering logs for storage-provisioner [60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f] ...
	I0116 23:59:17.304721   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60416d35ab032ff6bfb0a84ac81365707b33f773a37bd6dad6486fecd5b59e0f"
	I0116 23:59:17.346444   59938 logs.go:123] Gathering logs for describe nodes ...
	I0116 23:59:17.346470   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 23:59:17.497208   59938 logs.go:123] Gathering logs for kube-apiserver [bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1] ...
	I0116 23:59:17.497241   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6b71506f3a645e4fc4f5e38fe3eaa9d2e82aaab6178404f01b9f666ae23ee1"
	I0116 23:59:17.561621   59938 logs.go:123] Gathering logs for container status ...
	I0116 23:59:17.561648   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 23:59:17.611648   59938 logs.go:123] Gathering logs for kube-scheduler [307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26] ...
	I0116 23:59:17.611677   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 307723cb0d2c30cb5e31354ffa3fc2c420b5c1d1667e365b2be29f0940ac9c26"
	I0116 23:59:17.646407   59938 logs.go:123] Gathering logs for CRI-O ...
	I0116 23:59:17.646436   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 23:59:17.991476   59938 logs.go:123] Gathering logs for kubelet ...
	I0116 23:59:17.991528   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 23:59:18.053214   59938 logs.go:123] Gathering logs for etcd [3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d] ...
	I0116 23:59:18.053251   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae748115585fd6ba2e4702733c7a644ce92c16c08ed01579f43dfa990084b3d"
	I0116 23:59:18.128011   59938 logs.go:123] Gathering logs for coredns [77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782] ...
	I0116 23:59:18.128049   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f52399b3a56999277e4084d40d034a95bfe2dd245ca010ecbad3c43fca1782"
	I0116 23:59:18.165018   59938 logs.go:123] Gathering logs for kube-proxy [beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f] ...
	I0116 23:59:18.165052   59938 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beec9bf02a1701fdb10dd9281e1886ee764a4f4c48bbef7056f3900897300d3f"
	I0116 23:59:15.345715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.849104   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:15.896059   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:17.890054   60073 pod_ready.go:81] duration metric: took 4m0.00102229s waiting for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:17.890102   60073 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-npd7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:17.890127   60073 pod_ready.go:38] duration metric: took 4m7.665333761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:17.890162   60073 kubeadm.go:640] restartCluster took 4m29.748178484s
	W0116 23:59:17.890247   60073 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:17.890288   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 23:59:20.715055   59938 system_pods.go:59] 8 kube-system pods found
	I0116 23:59:20.715096   59938 system_pods.go:61] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.715109   59938 system_pods.go:61] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.715116   59938 system_pods.go:61] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.715123   59938 system_pods.go:61] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.715129   59938 system_pods.go:61] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.715136   59938 system_pods.go:61] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.715146   59938 system_pods.go:61] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.715156   59938 system_pods.go:61] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.715180   59938 system_pods.go:74] duration metric: took 3.864627163s to wait for pod list to return data ...
	I0116 23:59:20.715190   59938 default_sa.go:34] waiting for default service account to be created ...
	I0116 23:59:20.718138   59938 default_sa.go:45] found service account: "default"
	I0116 23:59:20.718165   59938 default_sa.go:55] duration metric: took 2.964863ms for default service account to be created ...
	I0116 23:59:20.718175   59938 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 23:59:20.724393   59938 system_pods.go:86] 8 kube-system pods found
	I0116 23:59:20.724420   59938 system_pods.go:89] "coredns-76f75df574-ptq95" [4b52129d-1f2b-49e8-abeb-b2737a6a6eff] Running
	I0116 23:59:20.724428   59938 system_pods.go:89] "etcd-no-preload-085322" [1c858b7d-b5e1-4cfc-bde1-5dc50105a25a] Running
	I0116 23:59:20.724435   59938 system_pods.go:89] "kube-apiserver-no-preload-085322" [8c7bb2d3-3242-4d66-952c-aecd44147cfa] Running
	I0116 23:59:20.724443   59938 system_pods.go:89] "kube-controller-manager-no-preload-085322" [724238a5-b8ee-4677-9f24-a4138499b99a] Running
	I0116 23:59:20.724449   59938 system_pods.go:89] "kube-proxy-64z5c" [c8f910ca-b577-47f6-a01a-4c7efadd20e4] Running
	I0116 23:59:20.724457   59938 system_pods.go:89] "kube-scheduler-no-preload-085322" [b5c31fae-9116-4c17-a3a8-bd5515a87f04] Running
	I0116 23:59:20.724467   59938 system_pods.go:89] "metrics-server-57f55c9bc5-xbr22" [04d3cffb-ab03-4d0d-8524-333d64531c87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 23:59:20.724479   59938 system_pods.go:89] "storage-provisioner" [60efc797-82b9-4614-8e43-ccf7e2d72911] Running
	I0116 23:59:20.724490   59938 system_pods.go:126] duration metric: took 6.307831ms to wait for k8s-apps to be running ...
	I0116 23:59:20.724503   59938 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 23:59:20.724558   59938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:20.739056   59938 system_svc.go:56] duration metric: took 14.504317ms WaitForService to wait for kubelet.
	I0116 23:59:20.739102   59938 kubeadm.go:581] duration metric: took 4m20.601225794s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 23:59:20.739130   59938 node_conditions.go:102] verifying NodePressure condition ...
	I0116 23:59:20.742521   59938 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 23:59:20.742550   59938 node_conditions.go:123] node cpu capacity is 2
	I0116 23:59:20.742565   59938 node_conditions.go:105] duration metric: took 3.429513ms to run NodePressure ...
	I0116 23:59:20.742581   59938 start.go:228] waiting for startup goroutines ...
	I0116 23:59:20.742594   59938 start.go:233] waiting for cluster config update ...
	I0116 23:59:20.742607   59938 start.go:242] writing updated cluster config ...
	I0116 23:59:20.742897   59938 ssh_runner.go:195] Run: rm -f paused
	I0116 23:59:20.796748   59938 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 23:59:20.799136   59938 out.go:177] * Done! kubectl is now configured to use "no-preload-085322" cluster and "default" namespace by default
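At this point the no-preload-085322 profile has finished starting and its context is the kubeconfig default. A quick sanity check from the host (illustrative only, not part of the test flow) would be:

	kubectl --context no-preload-085322 get nodes
	kubectl --context no-preload-085322 -n kube-system get pods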
	I0116 23:59:20.345640   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:22.845018   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:24.845103   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:26.846579   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:29.345070   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.346027   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:33.346506   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:31.203795   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.313480768s)
	I0116 23:59:31.203876   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:31.217359   60073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:31.228245   60073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:31.238220   60073 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
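These "No such file or directory" errors are expected: the preceding kubeadm reset removed the kubeconfigs under /etc/kubernetes, so the stale-config check exits with status 2, cleanup is skipped, and minikube proceeds straight to kubeadm init on the next line. A sketch of the same check (the identical ls shown above, with an explicit fallback message):

	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
	  || echo "no stale kubeconfigs found; running kubeadm init"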
	I0116 23:59:31.238268   60073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:31.453638   60073 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 23:59:35.845570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:37.845959   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace has status "Ready":"False"
	I0116 23:59:42.067699   60073 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:42.067758   60073 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:42.067846   60073 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:42.067963   60073 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:42.068086   60073 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:42.068177   60073 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:42.069920   60073 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:42.070029   60073 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:42.070134   60073 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:42.070239   60073 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:42.070320   60073 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:42.070461   60073 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:42.070543   60073 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:42.070628   60073 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:42.070700   60073 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:42.070790   60073 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:42.070885   60073 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:42.070932   60073 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:42.070998   60073 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:42.071063   60073 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:42.071135   60073 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:42.071215   60073 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:42.071285   60073 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:42.071387   60073 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:42.071470   60073 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:42.072979   60073 out.go:204]   - Booting up control plane ...
	I0116 23:59:42.073092   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:42.073200   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:42.073276   60073 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:42.073388   60073 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:42.073521   60073 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:42.073576   60073 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:42.073797   60073 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:42.073902   60073 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002800 seconds
	I0116 23:59:42.074028   60073 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 23:59:42.074167   60073 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 23:59:42.074262   60073 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 23:59:42.074513   60073 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-837871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 23:59:42.074590   60073 kubeadm.go:322] [bootstrap-token] Using token: ta3wls.bkzq7grnlnkl7idk
	I0116 23:59:42.076261   60073 out.go:204]   - Configuring RBAC rules ...
	I0116 23:59:42.076394   60073 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 23:59:42.076494   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 23:59:42.076672   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 23:59:42.076836   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 23:59:42.077027   60073 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 23:59:42.077141   60073 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 23:59:42.077286   60073 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 23:59:42.077338   60073 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 23:59:42.077401   60073 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 23:59:42.077420   60073 kubeadm.go:322] 
	I0116 23:59:42.077490   60073 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 23:59:42.077501   60073 kubeadm.go:322] 
	I0116 23:59:42.077590   60073 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 23:59:42.077599   60073 kubeadm.go:322] 
	I0116 23:59:42.077631   60073 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 23:59:42.077704   60073 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 23:59:42.077768   60073 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 23:59:42.077777   60073 kubeadm.go:322] 
	I0116 23:59:42.077841   60073 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 23:59:42.077855   60073 kubeadm.go:322] 
	I0116 23:59:42.077910   60073 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 23:59:42.077918   60073 kubeadm.go:322] 
	I0116 23:59:42.077980   60073 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 23:59:42.078071   60073 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 23:59:42.078167   60073 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 23:59:42.078177   60073 kubeadm.go:322] 
	I0116 23:59:42.078274   60073 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 23:59:42.078382   60073 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 23:59:42.078392   60073 kubeadm.go:322] 
	I0116 23:59:42.078488   60073 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078612   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0116 23:59:42.078642   60073 kubeadm.go:322] 	--control-plane 
	I0116 23:59:42.078651   60073 kubeadm.go:322] 
	I0116 23:59:42.078749   60073 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 23:59:42.078758   60073 kubeadm.go:322] 
	I0116 23:59:42.078854   60073 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ta3wls.bkzq7grnlnkl7idk \
	I0116 23:59:42.078989   60073 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0116 23:59:42.079007   60073 cni.go:84] Creating CNI manager for ""
	I0116 23:59:42.079017   60073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 23:59:42.080763   60073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 23:59:39.838671   60269 pod_ready.go:81] duration metric: took 4m0.001157455s waiting for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" ...
	E0116 23:59:39.838703   60269 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-bkbpm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 23:59:39.838724   60269 pod_ready.go:38] duration metric: took 4m10.089026356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:39.838774   60269 kubeadm.go:640] restartCluster took 4m29.617057242s
	W0116 23:59:39.838852   60269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 23:59:39.838881   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
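Both restart attempts above give up for the same reason: the metrics-server pod never reports Ready, so the extra 4m0s wait expires and the cluster is reset instead of restarted. In this test group the addon's image is overridden to fake.domain/registry.k8s.io/echoserver:1.4 (visible further down for the embed-certs profile), an address that cannot be pulled, which would keep the pod in ContainersNotReady. A hedged way to confirm the image on a live profile, assuming the addon's usual k8s-app=metrics-server label:

	kubectl -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].spec.containers[*].image}'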
	I0116 23:59:42.082183   60073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 23:59:42.116830   60073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 23:59:42.163609   60073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 23:59:42.163699   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.163705   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=embed-certs-837871 minikube.k8s.io/updated_at=2024_01_16T23_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:42.221959   60073 ops.go:34] apiserver oom_adj: -16
	I0116 23:59:42.506451   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.007345   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:43.506584   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.007197   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:44.507002   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.006480   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:45.506954   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.006461   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:46.506833   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.007157   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:47.506780   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.007146   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:48.506504   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:49.006489   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.364253   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.525344336s)
	I0116 23:59:53.364334   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:53.379240   60269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 23:59:53.389562   60269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 23:59:53.400331   60269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 23:59:53.400385   60269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 23:59:53.462116   60269 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 23:59:53.462202   60269 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 23:59:53.624890   60269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 23:59:53.625015   60269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 23:59:53.625132   60269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 23:59:53.877364   60269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 23:59:49.506939   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.007132   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:50.506909   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.006499   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:51.506508   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.006475   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:52.507008   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.007272   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:53.506479   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.007240   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.507034   60073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 23:59:54.651685   60073 kubeadm.go:1088] duration metric: took 12.488048347s to wait for elevateKubeSystemPrivileges.
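The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, which is what the elevateKubeSystemPrivileges duration metric reports once the poll succeeds. A rough manual equivalent of the poll (sketch only, using the same binary path and kubeconfig shown in the log):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done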
	I0116 23:59:54.651729   60073 kubeadm.go:406] StartCluster complete in 5m6.561279262s
	I0116 23:59:54.651753   60073 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.651855   60073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:59:54.654608   60073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 23:59:54.654868   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 23:59:54.654894   60073 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
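For this profile the addon manager is asked to turn on storage-provisioner, default-storageclass, and metrics-server; every other addon in the map is false. The user-facing equivalent, as a sketch (in the test these are enabled as part of the profile's start rather than by separate CLI calls), would be:

	minikube -p embed-certs-837871 addons enable storage-provisioner
	minikube -p embed-certs-837871 addons enable default-storageclass
	minikube -p embed-certs-837871 addons enable metrics-server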
	I0116 23:59:54.654964   60073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-837871"
	I0116 23:59:54.654980   60073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-837871"
	I0116 23:59:54.655005   60073 addons.go:69] Setting metrics-server=true in profile "embed-certs-837871"
	I0116 23:59:54.655018   60073 addons.go:234] Setting addon metrics-server=true in "embed-certs-837871"
	W0116 23:59:54.655027   60073 addons.go:243] addon metrics-server should already be in state true
	I0116 23:59:54.655090   60073 config.go:182] Loaded profile config "embed-certs-837871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:59:54.655026   60073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-837871"
	I0116 23:59:54.655160   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.654988   60073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-837871"
	W0116 23:59:54.655234   60073 addons.go:243] addon storage-provisioner should already be in state true
	I0116 23:59:54.655271   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.655539   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655568   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655652   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.655613   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.655734   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.672017   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0116 23:59:54.672591   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.673220   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.673241   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.673335   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0116 23:59:54.673863   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0116 23:59:54.673894   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.673865   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674262   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.674430   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674447   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.674491   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.674517   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.674764   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.674932   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.674943   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.675310   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.675465   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.675601   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.675631   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.679148   60073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-837871"
	W0116 23:59:54.679166   60073 addons.go:243] addon default-storageclass should already be in state true
	I0116 23:59:54.679192   60073 host.go:66] Checking if "embed-certs-837871" exists ...
	I0116 23:59:54.679564   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.679582   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.694210   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0116 23:59:54.694711   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.694923   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0116 23:59:54.695308   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.695325   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.695432   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.695724   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.696036   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.696059   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.696124   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.696524   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.697116   60073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:59:54.697142   60073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:59:54.697326   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0116 23:59:54.697741   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.698016   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.700178   60073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 23:59:54.698504   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.701842   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.701911   60073 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:54.701927   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 23:59:54.701945   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.704090   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.704258   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.705992   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.706067   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.707873   60073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 23:59:53.878701   60269 out.go:204]   - Generating certificates and keys ...
	I0116 23:59:53.878801   60269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 23:59:53.878881   60269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 23:59:53.879376   60269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 23:59:53.879833   60269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 23:59:53.880391   60269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 23:59:53.880900   60269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 23:59:53.881422   60269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 23:59:53.881941   60269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 23:59:53.882468   60269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 23:59:53.882982   60269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 23:59:53.883410   60269 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 23:59:53.883502   60269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 23:59:54.118678   60269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 23:59:54.334917   60269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 23:59:54.487424   60269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 23:59:55.124961   60269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 23:59:55.125701   60269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 23:59:55.128156   60269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 23:59:54.706475   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.706576   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.709278   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 23:59:54.709292   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 23:59:54.709305   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.709341   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.709501   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.709672   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.709805   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.712515   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713092   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.713180   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.713283   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.713426   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.713633   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.713742   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.716354   60073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0116 23:59:54.716699   60073 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:59:54.717118   60073 main.go:141] libmachine: Using API Version  1
	I0116 23:59:54.717135   60073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:59:54.717441   60073 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:59:54.717677   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetState
	I0116 23:59:54.719338   60073 main.go:141] libmachine: (embed-certs-837871) Calling .DriverName
	I0116 23:59:54.719591   60073 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:54.719604   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 23:59:54.719619   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHHostname
	I0116 23:59:54.722542   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.722963   60073 main.go:141] libmachine: (embed-certs-837871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:2a:3c", ip: ""} in network mk-embed-certs-837871: {Iface:virbr1 ExpiryTime:2024-01-17 00:54:33 +0000 UTC Type:0 Mac:52:54:00:e9:2a:3c Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:embed-certs-837871 Clientid:01:52:54:00:e9:2a:3c}
	I0116 23:59:54.723002   60073 main.go:141] libmachine: (embed-certs-837871) DBG | domain embed-certs-837871 has defined IP address 192.168.39.226 and MAC address 52:54:00:e9:2a:3c in network mk-embed-certs-837871
	I0116 23:59:54.723112   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHPort
	I0116 23:59:54.723259   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHKeyPath
	I0116 23:59:54.723463   60073 main.go:141] libmachine: (embed-certs-837871) Calling .GetSSHUsername
	I0116 23:59:54.723587   60073 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/embed-certs-837871/id_rsa Username:docker}
	I0116 23:59:54.885431   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
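The long sed pipeline above edits the CoreDNS Corefile in place so that host.minikube.internal resolves to the host-side gateway and query logging is enabled. As encoded in the sed expressions, the fragment injected before the existing "forward . /etc/resolv.conf" line is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

plus a "log" directive inserted before "errors". The "host record injected into CoreDNS's ConfigMap" line further down confirms the replace succeeded.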
	I0116 23:59:55.001297   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 23:59:55.001329   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 23:59:55.003513   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 23:59:55.008428   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 23:59:55.068722   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 23:59:55.068751   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 23:59:55.129663   60073 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:55.129686   60073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 23:59:55.161891   60073 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-837871" context rescaled to 1 replicas
	I0116 23:59:55.161935   60073 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 23:59:55.164356   60073 out.go:177] * Verifying Kubernetes components...
	I0116 23:59:55.165822   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:59:55.240612   60073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 23:59:56.696329   60073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.810851137s)
	I0116 23:59:56.696383   60073 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 23:59:56.696338   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.69278648s)
	I0116 23:59:56.696422   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696440   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.696806   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.696868   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.696879   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.696889   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.696898   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.697174   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.697191   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:56.697193   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.729656   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:56.729685   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:56.730006   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:56.730047   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:56.730051   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.196943   60073 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.031082317s)
	I0116 23:59:57.196991   60073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.197171   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.188708335s)
	I0116 23:59:57.197216   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197232   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197556   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197573   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.197590   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.197600   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.197905   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.197908   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.197976   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.211232   60073 node_ready.go:49] node "embed-certs-837871" has status "Ready":"True"
	I0116 23:59:57.211308   60073 node_ready.go:38] duration metric: took 14.304366ms waiting for node "embed-certs-837871" to be "Ready" ...
	I0116 23:59:57.211330   60073 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 23:59:57.230768   60073 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
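	(The pod_ready loop above polls each system-critical pod's Ready condition through the API server. A rough manual equivalent, as a sketch only and assuming the standard component labels listed above, would be:

	        kubectl --context embed-certs-837871 -n kube-system \
	          wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s

	with the same invocation repeated for the etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler selectors.)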
	I0116 23:59:57.274393   60073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.033730298s)
	I0116 23:59:57.274453   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274471   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.274881   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.274904   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.274915   60073 main.go:141] libmachine: Making call to close driver server
	I0116 23:59:57.274925   60073 main.go:141] libmachine: (embed-certs-837871) Calling .Close
	I0116 23:59:57.275196   60073 main.go:141] libmachine: (embed-certs-837871) DBG | Closing plugin on server side
	I0116 23:59:57.275249   60073 main.go:141] libmachine: Successfully made call to close driver server
	I0116 23:59:57.275273   60073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 23:59:57.275284   60073 addons.go:470] Verifying addon metrics-server=true in "embed-certs-837871"
	I0116 23:59:57.277304   60073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 23:59:55.129817   60269 out.go:204]   - Booting up control plane ...
	I0116 23:59:55.129937   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 23:59:55.130951   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 23:59:55.132943   60269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 23:59:55.149929   60269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 23:59:55.151138   60269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 23:59:55.151234   60269 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 23:59:55.303686   60269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 23:59:57.278953   60073 addons.go:505] enable addons completed in 2.62405803s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 23:59:58.738410   60073 pod_ready.go:92] pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.738434   60073 pod_ready.go:81] duration metric: took 1.507588571s waiting for pod "coredns-5dd5756b68-52xk7" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.738444   60073 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744592   60073 pod_ready.go:92] pod "etcd-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.744617   60073 pod_ready.go:81] duration metric: took 6.165419ms waiting for pod "etcd-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.744626   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750130   60073 pod_ready.go:92] pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.750152   60073 pod_ready.go:81] duration metric: took 5.519057ms waiting for pod "kube-apiserver-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.750164   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755783   60073 pod_ready.go:92] pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.755809   60073 pod_ready.go:81] duration metric: took 5.636904ms waiting for pod "kube-controller-manager-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.755821   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801735   60073 pod_ready.go:92] pod "kube-proxy-n2l6s" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:58.801769   60073 pod_ready.go:81] duration metric: took 45.939564ms waiting for pod "kube-proxy-n2l6s" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:58.801784   60073 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:02.807761   60269 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503615 seconds
	I0117 00:00:02.807943   60269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0117 00:00:02.828242   60269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0117 00:00:03.364977   60269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0117 00:00:03.365242   60269 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-967325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0117 00:00:03.879636   60269 kubeadm.go:322] [bootstrap-token] Using token: y6fuay.d44apxq5qutu9x05
	I0116 23:59:59.202392   60073 pod_ready.go:92] pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace has status "Ready":"True"
	I0116 23:59:59.202420   60073 pod_ready.go:81] duration metric: took 400.626378ms waiting for pod "kube-scheduler-embed-certs-837871" in "kube-system" namespace to be "Ready" ...
	I0116 23:59:59.202435   60073 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:01.211490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.710138   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:03.881170   60269 out.go:204]   - Configuring RBAC rules ...
	I0117 00:00:03.881357   60269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0117 00:00:03.888392   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0117 00:00:03.896580   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0117 00:00:03.900204   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0117 00:00:03.907475   60269 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0117 00:00:03.911613   60269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0117 00:00:03.931171   60269 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0117 00:00:04.171033   60269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0117 00:00:04.300769   60269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0117 00:00:04.300793   60269 kubeadm.go:322] 
	I0117 00:00:04.300911   60269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0117 00:00:04.300944   60269 kubeadm.go:322] 
	I0117 00:00:04.301038   60269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0117 00:00:04.301049   60269 kubeadm.go:322] 
	I0117 00:00:04.301089   60269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0117 00:00:04.301161   60269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0117 00:00:04.301223   60269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0117 00:00:04.301234   60269 kubeadm.go:322] 
	I0117 00:00:04.301302   60269 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0117 00:00:04.301312   60269 kubeadm.go:322] 
	I0117 00:00:04.301373   60269 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0117 00:00:04.301387   60269 kubeadm.go:322] 
	I0117 00:00:04.301445   60269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0117 00:00:04.301545   60269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0117 00:00:04.301645   60269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0117 00:00:04.301656   60269 kubeadm.go:322] 
	I0117 00:00:04.301758   60269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0117 00:00:04.301861   60269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0117 00:00:04.301871   60269 kubeadm.go:322] 
	I0117 00:00:04.301972   60269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302108   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c \
	I0117 00:00:04.302156   60269 kubeadm.go:322] 	--control-plane 
	I0117 00:00:04.302167   60269 kubeadm.go:322] 
	I0117 00:00:04.302261   60269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0117 00:00:04.302272   60269 kubeadm.go:322] 
	I0117 00:00:04.302381   60269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token y6fuay.d44apxq5qutu9x05 \
	I0117 00:00:04.302499   60269 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8e76b7f2c82aa911316558e198a659f462279a6bc4742d325da2fba085ec866c 
	I0117 00:00:04.303423   60269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0117 00:00:04.303460   60269 cni.go:84] Creating CNI manager for ""
	I0117 00:00:04.303481   60269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0117 00:00:04.305311   60269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0117 00:00:04.307124   60269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0117 00:00:04.322172   60269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0117 00:00:04.389195   60269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0117 00:00:04.389280   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.389289   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2 minikube.k8s.io/name=default-k8s-diff-port-967325 minikube.k8s.io/updated_at=2024_01_17T00_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:04.714781   60269 ops.go:34] apiserver oom_adj: -16
	I0117 00:00:04.714929   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.215335   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.715241   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.215729   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:06.715270   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.215562   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:07.716006   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.215883   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:08.715530   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:05.710945   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:08.210490   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:09.215561   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:09.715330   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215559   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.715284   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.215535   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:11.715573   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.215144   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:12.715603   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:13.715595   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:10.215862   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:12.709378   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:14.215373   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:14.715933   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.215536   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:15.715488   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.215344   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.714958   60269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0117 00:00:16.874728   60269 kubeadm.go:1088] duration metric: took 12.485508304s to wait for elevateKubeSystemPrivileges.
	I0117 00:00:16.874771   60269 kubeadm.go:406] StartCluster complete in 5m6.711968782s
	I0117 00:00:16.874796   60269 settings.go:142] acquiring lock: {Name:mkb47a27459fa76d2df71eedb67ccb289850e44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.874888   60269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0117 00:00:16.877055   60269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17975-6238/kubeconfig: {Name:mke2940c6a0083089c536b9f9b1de8133228c014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0117 00:00:16.877357   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0117 00:00:16.877379   60269 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0117 00:00:16.877462   60269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877481   60269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877496   60269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-967325"
	I0117 00:00:16.877517   60269 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877523   60269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-967325"
	W0117 00:00:16.877526   60269 addons.go:243] addon metrics-server should already be in state true
	I0117 00:00:16.877487   60269 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-967325"
	I0117 00:00:16.877580   60269 config.go:182] Loaded profile config "default-k8s-diff-port-967325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0117 00:00:16.877586   60269 addons.go:243] addon storage-provisioner should already be in state true
	I0117 00:00:16.877598   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877641   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.877996   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.878023   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878044   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.877974   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.878110   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.894446   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0117 00:00:16.894710   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0117 00:00:16.894884   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895198   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.895375   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895395   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895731   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.895757   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.895804   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896075   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.896401   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.896436   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.896491   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0117 00:00:16.896763   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.897458   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.898007   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.898028   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.898517   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.899079   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.899106   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.900589   60269 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-967325"
	W0117 00:00:16.900606   60269 addons.go:243] addon default-storageclass should already be in state true
	I0117 00:00:16.900632   60269 host.go:66] Checking if "default-k8s-diff-port-967325" exists ...
	I0117 00:00:16.900945   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.900974   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.917329   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0117 00:00:16.918223   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0117 00:00:16.918283   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918593   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.918787   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.918806   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919109   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.919135   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.919173   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919426   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.919500   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.919712   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.921674   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.923470   60269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0117 00:00:16.922093   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.924865   60269 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:16.924882   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0117 00:00:16.924900   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.926158   60269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0117 00:00:16.927440   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0117 00:00:16.927461   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0117 00:00:16.927490   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.928105   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928672   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.928694   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.928912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.929107   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.929289   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.929432   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.930149   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0117 00:00:16.930552   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.931255   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.931275   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.931335   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931584   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.931606   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.931762   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.931908   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.932042   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.932086   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.932178   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:16.933382   60269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0117 00:00:16.933419   60269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0117 00:00:16.949543   60269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0117 00:00:16.950092   60269 main.go:141] libmachine: () Calling .GetVersion
	I0117 00:00:16.950585   60269 main.go:141] libmachine: Using API Version  1
	I0117 00:00:16.950611   60269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0117 00:00:16.950912   60269 main.go:141] libmachine: () Calling .GetMachineName
	I0117 00:00:16.951212   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetState
	I0117 00:00:16.952912   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .DriverName
	I0117 00:00:16.953207   60269 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:16.953221   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0117 00:00:16.953242   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHHostname
	I0117 00:00:16.955778   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956104   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:00:23", ip: ""} in network mk-default-k8s-diff-port-967325: {Iface:virbr3 ExpiryTime:2024-01-17 00:54:53 +0000 UTC Type:0 Mac:52:54:00:31:00:23 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-967325 Clientid:01:52:54:00:31:00:23}
	I0117 00:00:16.956144   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | domain default-k8s-diff-port-967325 has defined IP address 192.168.61.144 and MAC address 52:54:00:31:00:23 in network mk-default-k8s-diff-port-967325
	I0117 00:00:16.956381   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHPort
	I0117 00:00:16.956659   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHKeyPath
	I0117 00:00:16.956808   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .GetSSHUsername
	I0117 00:00:16.956958   60269 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/default-k8s-diff-port-967325/id_rsa Username:docker}
	I0117 00:00:17.129430   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0117 00:00:17.167358   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0117 00:00:17.198527   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0117 00:00:17.198553   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0117 00:00:17.313705   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0117 00:00:17.313743   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0117 00:00:17.318720   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0117 00:00:17.387945   60269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-967325" context rescaled to 1 replicas
	I0117 00:00:17.387984   60269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0117 00:00:17.391319   60269 out.go:177] * Verifying Kubernetes components...
	I0117 00:00:17.392893   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:00:17.493520   60269 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:17.493544   60269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0117 00:00:17.613989   60269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0117 00:00:14.710779   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:17.209946   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:18.852085   60269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.722614342s)
	I0117 00:00:18.852124   60269 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0117 00:00:19.595960   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.277198121s)
	I0117 00:00:19.595983   60269 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.203057581s)
	I0117 00:00:19.596019   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596022   60269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.596033   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596131   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.428744793s)
	I0117 00:00:19.596164   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596175   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596418   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596437   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596448   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596458   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596544   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596572   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596585   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.596603   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.596616   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.596675   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.596683   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.596697   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.598431   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.598485   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.598507   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.614041   60269 node_ready.go:49] node "default-k8s-diff-port-967325" has status "Ready":"True"
	I0117 00:00:19.614070   60269 node_ready.go:38] duration metric: took 18.033715ms waiting for node "default-k8s-diff-port-967325" to be "Ready" ...
	I0117 00:00:19.614083   60269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:00:19.631026   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.631065   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.631393   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.631412   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.631430   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.643995   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.685268   60269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.071240033s)
	I0117 00:00:19.685313   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685327   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685685   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685706   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685722   60269 main.go:141] libmachine: Making call to close driver server
	I0117 00:00:19.685725   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) DBG | Closing plugin on server side
	I0117 00:00:19.685733   60269 main.go:141] libmachine: (default-k8s-diff-port-967325) Calling .Close
	I0117 00:00:19.685949   60269 main.go:141] libmachine: Successfully made call to close driver server
	I0117 00:00:19.685973   60269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0117 00:00:19.685984   60269 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-967325"
	I0117 00:00:19.688162   60269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0117 00:00:19.690707   60269 addons.go:505] enable addons completed in 2.813327403s: enabled=[storage-provisioner default-storageclass metrics-server]
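	(The long run of "Ready":"False" polls that follows is expected in this test: the metrics-server addon here is wired to the placeholder image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), so an image pull failure is the likely reason the pod never reports Ready within this window. A hedged way to confirm that from the host, assuming the addon's usual k8s-app=metrics-server label, would be:

	        kubectl --context default-k8s-diff-port-967325 -n kube-system get pods -l k8s-app=metrics-server
	        kubectl --context default-k8s-diff-port-967325 -n kube-system describe deployment metrics-server

	the describe output should show the image pull status for the metrics-server pod.)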
	I0117 00:00:20.653786   60269 pod_ready.go:92] pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.653817   60269 pod_ready.go:81] duration metric: took 1.009789354s waiting for pod "coredns-5dd5756b68-gtx6b" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.653827   60269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.657327   60269 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657355   60269 pod_ready.go:81] duration metric: took 3.520465ms waiting for pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace to be "Ready" ...
	E0117 00:00:20.657367   60269 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t75qd" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t75qd" not found
	I0117 00:00:20.657375   60269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664327   60269 pod_ready.go:92] pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.664345   60269 pod_ready.go:81] duration metric: took 6.963883ms waiting for pod "etcd-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.664354   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669229   60269 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.669247   60269 pod_ready.go:81] duration metric: took 4.887581ms waiting for pod "kube-apiserver-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.669255   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675553   60269 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:20.675577   60269 pod_ready.go:81] duration metric: took 6.316801ms waiting for pod "kube-controller-manager-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:20.675585   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800600   60269 pod_ready.go:92] pod "kube-proxy-2z6bl" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:21.800632   60269 pod_ready.go:81] duration metric: took 1.125039774s waiting for pod "kube-proxy-2z6bl" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:21.800646   60269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200536   60269 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace has status "Ready":"True"
	I0117 00:00:22.200559   60269 pod_ready.go:81] duration metric: took 399.905665ms waiting for pod "kube-scheduler-default-k8s-diff-port-967325" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:22.200569   60269 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	I0117 00:00:19.212369   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:21.709474   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:23.710530   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:24.210445   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:26.709024   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:28.709454   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:25.710634   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:27.710692   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:30.709571   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.710848   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:29.710867   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:32.209611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:35.208419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:37.708871   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:34.209847   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:36.210863   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:38.211047   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.209274   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711560   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:40.212061   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:42.711598   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.209016   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211322   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:45.211051   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:47.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.709459   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.209458   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:49.711889   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:52.210405   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.710123   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:57.208591   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:54.210670   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:56.711102   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:58.711595   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:00:59.708515   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.710699   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:01.210587   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:03.210938   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:04.207715   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:06.709563   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:05.211825   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:07.709958   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:09.208156   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:11.208879   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:13.708545   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:10.211100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:12.710100   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:16.209033   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:18.209754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:14.710821   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:17.212258   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:20.708444   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.712038   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:19.711436   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:22.210580   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.714772   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:27.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:24.213488   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:26.711404   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.710945   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:32.208179   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:29.211008   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:31.212442   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:33.711966   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:34.208936   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.209612   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.708413   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:36.211118   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:38.214093   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:41.208750   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:43.208812   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:40.710199   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:42.710497   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.708094   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:48.209242   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:45.210899   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:47.214352   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:50.708669   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:52.709880   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:49.709767   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:51.710715   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:53.714522   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:55.209030   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:57.709205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:56.212226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:01:58.715976   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:00.209358   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:02.710521   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:01.210842   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:03.710418   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.208742   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:07.210121   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:05.711354   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:08.211933   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:09.210830   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:11.708402   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:13.710205   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:10.212433   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:12.715928   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:16.207633   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:18.208824   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:15.214546   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:17.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.209380   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.708970   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:20.212349   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:22.711167   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.208762   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.708487   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:25.212601   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:27.710070   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:30.209319   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.708822   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:29.711046   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:32.211983   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:35.207798   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.217291   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:34.710869   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:37.210140   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.707745   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711335   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:39.708871   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:41.711327   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.207582   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.207988   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:48.709297   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:44.211602   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:46.714689   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.208519   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.208808   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:49.212952   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:51.214415   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:53.710355   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.209145   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:57.210556   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:55.716301   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:58.211226   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:02:59.709541   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.208573   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:00.709819   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:02.712699   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.208754   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:06.708448   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:08.709286   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:04.713780   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:07.213872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:10.709570   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:13.208062   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:09.714259   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:12.211448   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:15.209488   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:17.709522   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:14.710693   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:16.711192   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:20.207874   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:22.211189   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:19.210191   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:21.210773   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:23.213975   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:24.708835   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:26.708889   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:25.710691   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:27.711139   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:29.209704   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:31.209811   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:33.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:30.210569   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:32.211539   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:35.708998   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:38.208295   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:34.711729   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:37.210492   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:40.707726   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:42.709246   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:39.211926   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:41.711599   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:43.711794   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:44.710010   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:47.208407   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:46.211285   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:48.212279   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:49.208869   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:51.210676   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:53.708315   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:50.212776   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:52.710665   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:55.709867   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:58.210415   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:54.711312   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:57.210611   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:00.708385   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:03.208916   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210872   60073 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace has status "Ready":"False"
	I0117 00:03:59.210900   60073 pod_ready.go:81] duration metric: took 4m0.008455197s waiting for pod "metrics-server-57f55c9bc5-6rsbl" in "kube-system" namespace to be "Ready" ...
	E0117 00:03:59.210913   60073 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:03:59.210923   60073 pod_ready.go:38] duration metric: took 4m1.999568751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:03:59.210941   60073 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:03:59.210977   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:03:59.211045   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:03:59.268921   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.268947   60073 cri.go:89] found id: ""
	I0117 00:03:59.268956   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:03:59.269005   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.273505   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:03:59.273575   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:03:59.316812   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:03:59.316838   60073 cri.go:89] found id: ""
	I0117 00:03:59.316847   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:03:59.316902   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.321703   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:03:59.321778   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:03:59.365900   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:03:59.365920   60073 cri.go:89] found id: ""
	I0117 00:03:59.365927   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:03:59.365979   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.371077   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:03:59.371148   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:03:59.410379   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:03:59.410405   60073 cri.go:89] found id: ""
	I0117 00:03:59.410415   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:03:59.410475   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.414679   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:03:59.414752   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:03:59.452102   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.452137   60073 cri.go:89] found id: ""
	I0117 00:03:59.452146   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:03:59.452208   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.456735   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:03:59.456805   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:03:59.497070   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:03:59.497097   60073 cri.go:89] found id: ""
	I0117 00:03:59.497105   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:03:59.497172   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.501388   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:03:59.501464   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:03:59.542895   60073 cri.go:89] found id: ""
	I0117 00:03:59.542921   60073 logs.go:284] 0 containers: []
	W0117 00:03:59.542929   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:03:59.542935   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:03:59.542986   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:03:59.579487   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:03:59.579510   60073 cri.go:89] found id: ""
	I0117 00:03:59.579529   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:03:59.579583   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:03:59.583247   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:03:59.583272   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:03:59.682098   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:03:59.682136   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:03:59.811527   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:03:59.811555   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:03:59.858592   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:03:59.858623   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:03:59.896044   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:03:59.896077   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:00.305516   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:00.305553   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:00.346703   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:00.346734   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:00.360638   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:00.360671   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:00.405575   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:00.405607   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:00.443294   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:00.443325   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:00.489541   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:00.489572   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:00.547805   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:00.547835   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.085588   60073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:03.102500   60073 api_server.go:72] duration metric: took 4m7.940532649s to wait for apiserver process to appear ...
	I0117 00:04:03.102525   60073 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:03.102560   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:03.102604   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:03.154743   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.154765   60073 cri.go:89] found id: ""
	I0117 00:04:03.154775   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:03.154837   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.158905   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:03.158964   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:03.199001   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.199026   60073 cri.go:89] found id: ""
	I0117 00:04:03.199035   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:03.199090   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.203757   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:03.203821   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:03.243821   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:03.243853   60073 cri.go:89] found id: ""
	I0117 00:04:03.243862   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:03.243926   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.248835   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:03.248938   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:03.287785   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.287807   60073 cri.go:89] found id: ""
	I0117 00:04:03.287817   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:03.287879   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.291737   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:03.291795   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:03.329647   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.329671   60073 cri.go:89] found id: ""
	I0117 00:04:03.329680   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:03.329740   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.337418   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:03.337513   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:03.375391   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:03.375412   60073 cri.go:89] found id: ""
	I0117 00:04:03.375419   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:03.375468   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.379630   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:03.379697   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:03.418311   60073 cri.go:89] found id: ""
	I0117 00:04:03.418353   60073 logs.go:284] 0 containers: []
	W0117 00:04:03.418366   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:03.418374   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:03.418425   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:03.464391   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.464414   60073 cri.go:89] found id: ""
	I0117 00:04:03.464421   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:03.464465   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:03.469427   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:03.469463   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:03.568016   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:03.568061   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:03.581553   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:03.581578   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:03.628971   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:03.629007   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:03.679732   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:03.679768   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:03.728836   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:03.728875   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:03.771849   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:03.771879   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:03.902777   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:03.902816   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:03.952219   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:03.952255   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:04.003190   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:04.003247   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:05.708428   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:07.708492   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:04.067058   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:04.067090   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:04.446812   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:04.446869   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:07.005449   60073 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I0117 00:04:07.011401   60073 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I0117 00:04:07.012696   60073 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:07.012723   60073 api_server.go:131] duration metric: took 3.910192448s to wait for apiserver health ...
	I0117 00:04:07.012732   60073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:07.012758   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:07.012804   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:07.052667   60073 cri.go:89] found id: "d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:07.052699   60073 cri.go:89] found id: ""
	I0117 00:04:07.052708   60073 logs.go:284] 1 containers: [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699]
	I0117 00:04:07.052769   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.057415   60073 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:07.057482   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:07.096347   60073 cri.go:89] found id: "c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.096374   60073 cri.go:89] found id: ""
	I0117 00:04:07.096383   60073 logs.go:284] 1 containers: [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d]
	I0117 00:04:07.096445   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.100499   60073 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:07.100598   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:07.145539   60073 cri.go:89] found id: "fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:07.145561   60073 cri.go:89] found id: ""
	I0117 00:04:07.145567   60073 logs.go:284] 1 containers: [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743]
	I0117 00:04:07.145625   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.149880   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:07.149936   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:07.188723   60073 cri.go:89] found id: "724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:07.188751   60073 cri.go:89] found id: ""
	I0117 00:04:07.188760   60073 logs.go:284] 1 containers: [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3]
	I0117 00:04:07.188822   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.193191   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:07.193259   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:07.236787   60073 cri.go:89] found id: "85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.236811   60073 cri.go:89] found id: ""
	I0117 00:04:07.236820   60073 logs.go:284] 1 containers: [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd]
	I0117 00:04:07.236876   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.241167   60073 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:07.241219   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:07.279432   60073 cri.go:89] found id: "caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.279453   60073 cri.go:89] found id: ""
	I0117 00:04:07.279462   60073 logs.go:284] 1 containers: [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44]
	I0117 00:04:07.279527   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.283548   60073 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:07.283618   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:07.319879   60073 cri.go:89] found id: ""
	I0117 00:04:07.319912   60073 logs.go:284] 0 containers: []
	W0117 00:04:07.319922   60073 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:07.319930   60073 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:07.319992   60073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:07.356138   60073 cri.go:89] found id: "304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.356162   60073 cri.go:89] found id: ""
	I0117 00:04:07.356170   60073 logs.go:284] 1 containers: [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0]
	I0117 00:04:07.356219   60073 ssh_runner.go:195] Run: which crictl
	I0117 00:04:07.360310   60073 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:07.360339   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:07.457151   60073 logs.go:123] Gathering logs for etcd [c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d] ...
	I0117 00:04:07.457197   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4895b3e5cab3bcfeae61adea3da9187098ec2315adde2feed8eea49269a0d3d"
	I0117 00:04:07.501163   60073 logs.go:123] Gathering logs for kube-proxy [85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd] ...
	I0117 00:04:07.501207   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a871eaadf5255dccb79a4d0fc1ebd635b13964759415c5d025d86cf9c610dd"
	I0117 00:04:07.544248   60073 logs.go:123] Gathering logs for kube-controller-manager [caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44] ...
	I0117 00:04:07.544279   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caa2304d7d208bade9e12896b595a8dab84503b62b227e4067b6517b18721e44"
	I0117 00:04:07.593284   60073 logs.go:123] Gathering logs for storage-provisioner [304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0] ...
	I0117 00:04:07.593321   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 304b75257b98a046e0119fb0badeb60609b5377dadf58cbf74ef4681deb9b5f0"
	I0117 00:04:07.635978   60073 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:07.636016   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:07.950451   60073 logs.go:123] Gathering logs for container status ...
	I0117 00:04:07.950489   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:08.003046   60073 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:08.003089   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:08.017299   60073 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:08.017341   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:08.152348   60073 logs.go:123] Gathering logs for kube-apiserver [d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699] ...
	I0117 00:04:08.152401   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d76dfa44d72e3ecf88afae9b852a6f43e0b747e971504a4cb98392d4ff4c6699"
	I0117 00:04:08.213047   60073 logs.go:123] Gathering logs for coredns [fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743] ...
	I0117 00:04:08.213084   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf799dc2641ea380bae4abb97d2ead396b894bac5ff8a880743c639b7898743"
	I0117 00:04:08.249860   60073 logs.go:123] Gathering logs for kube-scheduler [724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3] ...
	I0117 00:04:08.249897   60073 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724ffd940ff0311cc6fb549b4206371dcae35ebd12c5c1fae5788b834305ddf3"
	I0117 00:04:10.813629   60073 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:10.813656   60073 system_pods.go:61] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.813670   60073 system_pods.go:61] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.813676   60073 system_pods.go:61] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.813681   60073 system_pods.go:61] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.813685   60073 system_pods.go:61] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.813689   60073 system_pods.go:61] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.813695   60073 system_pods.go:61] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.813699   60073 system_pods.go:61] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.813707   60073 system_pods.go:74] duration metric: took 3.800969531s to wait for pod list to return data ...
	I0117 00:04:10.813714   60073 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:10.816640   60073 default_sa.go:45] found service account: "default"
	I0117 00:04:10.816662   60073 default_sa.go:55] duration metric: took 2.941561ms for default service account to be created ...
	I0117 00:04:10.816669   60073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:10.823246   60073 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:10.823270   60073 system_pods.go:89] "coredns-5dd5756b68-52xk7" [b4fac6c4-b902-4f0f-9999-b212b64c94ec] Running
	I0117 00:04:10.823274   60073 system_pods.go:89] "etcd-embed-certs-837871" [2c35357e-6370-4561-a37f-32185d725801] Running
	I0117 00:04:10.823279   60073 system_pods.go:89] "kube-apiserver-embed-certs-837871" [6e479f91-e5c6-440b-9741-36c3781b4b3d] Running
	I0117 00:04:10.823283   60073 system_pods.go:89] "kube-controller-manager-embed-certs-837871" [8129e936-6533-4be4-8f65-b90a4a75cf28] Running
	I0117 00:04:10.823287   60073 system_pods.go:89] "kube-proxy-n2l6s" [85153ef8-2cfa-4fce-82a5-b66e94c2f400] Running
	I0117 00:04:10.823291   60073 system_pods.go:89] "kube-scheduler-embed-certs-837871" [77ea33e3-f163-4a1b-a225-e0fe2ea6ebb0] Running
	I0117 00:04:10.823297   60073 system_pods.go:89] "metrics-server-57f55c9bc5-6rsbl" [c3af6965-7851-4a08-8c60-78fefb523e9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:10.823302   60073 system_pods.go:89] "storage-provisioner" [892c3a03-f9c9-46de-967a-6d2b9ea5c7f8] Running
	I0117 00:04:10.823309   60073 system_pods.go:126] duration metric: took 6.635452ms to wait for k8s-apps to be running ...
	I0117 00:04:10.823316   60073 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:10.823358   60073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:10.840725   60073 system_svc.go:56] duration metric: took 17.401272ms WaitForService to wait for kubelet.
	I0117 00:04:10.840756   60073 kubeadm.go:581] duration metric: took 4m15.678792469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:10.840782   60073 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:10.843904   60073 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:10.843926   60073 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:10.843938   60073 node_conditions.go:105] duration metric: took 3.150197ms to run NodePressure ...
	I0117 00:04:10.843949   60073 start.go:228] waiting for startup goroutines ...
	I0117 00:04:10.843954   60073 start.go:233] waiting for cluster config update ...
	I0117 00:04:10.843963   60073 start.go:242] writing updated cluster config ...
	I0117 00:04:10.844214   60073 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:10.894554   60073 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:10.896971   60073 out.go:177] * Done! kubectl is now configured to use "embed-certs-837871" cluster and "default" namespace by default
	I0117 00:04:10.209252   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:12.707441   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:14.707981   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:17.208289   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:19.708419   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:21.708960   60269 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace has status "Ready":"False"
	I0117 00:04:22.208465   60269 pod_ready.go:81] duration metric: took 4m0.007885269s waiting for pod "metrics-server-57f55c9bc5-dqkll" in "kube-system" namespace to be "Ready" ...
	E0117 00:04:22.208486   60269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0117 00:04:22.208494   60269 pod_ready.go:38] duration metric: took 4m2.594399816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0117 00:04:22.208508   60269 api_server.go:52] waiting for apiserver process to appear ...
	I0117 00:04:22.208558   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:22.208608   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:22.258977   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.259005   60269 cri.go:89] found id: ""
	I0117 00:04:22.259013   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:22.259116   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.264067   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:22.264126   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:22.302361   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:22.302396   60269 cri.go:89] found id: ""
	I0117 00:04:22.302407   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:22.302471   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.306898   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:22.306956   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:22.347083   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.347110   60269 cri.go:89] found id: ""
	I0117 00:04:22.347119   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:22.347177   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.352368   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:22.352441   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:22.392093   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:22.392121   60269 cri.go:89] found id: ""
	I0117 00:04:22.392131   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:22.392264   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.397726   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:22.397791   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:22.434242   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:22.434265   60269 cri.go:89] found id: ""
	I0117 00:04:22.434275   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:22.434342   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.438904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:22.438969   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:22.474797   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.474818   60269 cri.go:89] found id: ""
	I0117 00:04:22.474828   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:22.474874   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.478956   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:22.479020   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:22.517049   60269 cri.go:89] found id: ""
	I0117 00:04:22.517078   60269 logs.go:284] 0 containers: []
	W0117 00:04:22.517089   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:22.517096   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:22.517160   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:22.566393   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:22.566419   60269 cri.go:89] found id: ""
	I0117 00:04:22.566428   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:22.566486   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:22.572179   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:22.572206   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:22.624440   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:22.624471   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:22.666603   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:22.666629   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:22.734797   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:22.734829   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:22.827906   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:22.827941   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:22.842239   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:22.842269   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:22.990196   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:22.990226   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:23.048894   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:23.048933   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:23.093309   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:23.093340   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:23.135374   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:23.135400   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:23.172339   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:23.172366   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:23.567228   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:23.567266   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:26.111237   60269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0117 00:04:26.127331   60269 api_server.go:72] duration metric: took 4m8.739316517s to wait for apiserver process to appear ...
	I0117 00:04:26.127358   60269 api_server.go:88] waiting for apiserver healthz status ...
	I0117 00:04:26.127403   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:26.127465   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:26.164726   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:26.164752   60269 cri.go:89] found id: ""
	I0117 00:04:26.164763   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:26.164824   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.168448   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:26.168500   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:26.205643   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:26.205673   60269 cri.go:89] found id: ""
	I0117 00:04:26.205682   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:26.205742   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.209923   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:26.209982   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:26.247432   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:26.247456   60269 cri.go:89] found id: ""
	I0117 00:04:26.247463   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:26.247514   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.251904   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:26.252009   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:26.292943   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.292971   60269 cri.go:89] found id: ""
	I0117 00:04:26.292980   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:26.293038   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.298224   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:26.298307   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:26.338299   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:26.338322   60269 cri.go:89] found id: ""
	I0117 00:04:26.338331   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:26.338398   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.342452   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:26.342520   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:26.384665   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.384693   60269 cri.go:89] found id: ""
	I0117 00:04:26.384702   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:26.384761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.389556   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:26.389629   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:26.427717   60269 cri.go:89] found id: ""
	I0117 00:04:26.427748   60269 logs.go:284] 0 containers: []
	W0117 00:04:26.427758   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:26.427766   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:26.427825   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:26.467435   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.467463   60269 cri.go:89] found id: ""
	I0117 00:04:26.467471   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:26.467529   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:26.471617   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:26.471641   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:26.514185   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:26.514216   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:26.569408   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:26.569440   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:26.610011   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:26.610040   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:26.976249   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:26.976286   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:27.019812   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:27.019855   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:27.064258   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:27.064285   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:27.104147   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:27.104181   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:27.157665   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:27.157695   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:27.255786   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:27.255824   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:27.269460   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:27.269497   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:27.420255   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:27.420288   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.008636   60269 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0117 00:04:30.014467   60269 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0117 00:04:30.015693   60269 api_server.go:141] control plane version: v1.28.4
	I0117 00:04:30.015716   60269 api_server.go:131] duration metric: took 3.888351113s to wait for apiserver health ...
	I0117 00:04:30.015724   60269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0117 00:04:30.015745   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0117 00:04:30.015789   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0117 00:04:30.055587   60269 cri.go:89] found id: "44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.055608   60269 cri.go:89] found id: ""
	I0117 00:04:30.055626   60269 logs.go:284] 1 containers: [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae]
	I0117 00:04:30.055677   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.060043   60269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0117 00:04:30.060108   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0117 00:04:30.102912   60269 cri.go:89] found id: "1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:30.102938   60269 cri.go:89] found id: ""
	I0117 00:04:30.102946   60269 logs.go:284] 1 containers: [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea]
	I0117 00:04:30.102995   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.107429   60269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0117 00:04:30.107490   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0117 00:04:30.149238   60269 cri.go:89] found id: "d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.149259   60269 cri.go:89] found id: ""
	I0117 00:04:30.149266   60269 logs.go:284] 1 containers: [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868]
	I0117 00:04:30.149318   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.154207   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0117 00:04:30.154276   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0117 00:04:30.195972   60269 cri.go:89] found id: "40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.195998   60269 cri.go:89] found id: ""
	I0117 00:04:30.196008   60269 logs.go:284] 1 containers: [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373]
	I0117 00:04:30.196067   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.200515   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0117 00:04:30.200593   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0117 00:04:30.242656   60269 cri.go:89] found id: "a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.242686   60269 cri.go:89] found id: ""
	I0117 00:04:30.242696   60269 logs.go:284] 1 containers: [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542]
	I0117 00:04:30.242761   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.247430   60269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0117 00:04:30.247488   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0117 00:04:30.285008   60269 cri.go:89] found id: "c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.285036   60269 cri.go:89] found id: ""
	I0117 00:04:30.285045   60269 logs.go:284] 1 containers: [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d]
	I0117 00:04:30.285123   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.292254   60269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0117 00:04:30.292325   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0117 00:04:30.329856   60269 cri.go:89] found id: ""
	I0117 00:04:30.329884   60269 logs.go:284] 0 containers: []
	W0117 00:04:30.329895   60269 logs.go:286] No container was found matching "kindnet"
	I0117 00:04:30.329902   60269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0117 00:04:30.329962   60269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0117 00:04:30.370003   60269 cri.go:89] found id: "284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.370026   60269 cri.go:89] found id: ""
	I0117 00:04:30.370033   60269 logs.go:284] 1 containers: [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837]
	I0117 00:04:30.370081   60269 ssh_runner.go:195] Run: which crictl
	I0117 00:04:30.374869   60269 logs.go:123] Gathering logs for dmesg ...
	I0117 00:04:30.374896   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0117 00:04:30.388524   60269 logs.go:123] Gathering logs for describe nodes ...
	I0117 00:04:30.388564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0117 00:04:30.520901   60269 logs.go:123] Gathering logs for kube-apiserver [44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae] ...
	I0117 00:04:30.520935   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44c04220b559e5da8bfaafc371f0705958851505d86888c19c6f0068dbf475ae"
	I0117 00:04:30.568977   60269 logs.go:123] Gathering logs for coredns [d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868] ...
	I0117 00:04:30.569016   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d54e67f6cfd4e81eada1215000e3375d2fde120e6ec48cb4a6c7a95f1fa81868"
	I0117 00:04:30.604580   60269 logs.go:123] Gathering logs for kube-proxy [a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542] ...
	I0117 00:04:30.604620   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7769a6a67bd2f04af2ec4cfd42ab565d2e789e65c28bc8633b69a042fab4542"
	I0117 00:04:30.642634   60269 logs.go:123] Gathering logs for kube-controller-manager [c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d] ...
	I0117 00:04:30.642668   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c733c24fe4cac900df4972693d96ba59cfa28925ce83aab0d91650b73f48940d"
	I0117 00:04:30.692005   60269 logs.go:123] Gathering logs for container status ...
	I0117 00:04:30.692048   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0117 00:04:30.745471   60269 logs.go:123] Gathering logs for kubelet ...
	I0117 00:04:30.745532   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0117 00:04:30.842886   60269 logs.go:123] Gathering logs for kube-scheduler [40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373] ...
	I0117 00:04:30.842926   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40ee2a17afa04a9ac81c949d98c4951e05e22facd6c16c934e87ca26ccfd1373"
	I0117 00:04:30.891850   60269 logs.go:123] Gathering logs for storage-provisioner [284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837] ...
	I0117 00:04:30.891882   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 284632eb250da1ef8688b3b26c62f0de2681f17e9f098c58681f52af1774b837"
	I0117 00:04:30.929266   60269 logs.go:123] Gathering logs for CRI-O ...
	I0117 00:04:30.929295   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0117 00:04:31.236511   60269 logs.go:123] Gathering logs for etcd [1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea] ...
	I0117 00:04:31.236564   60269 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc993cc983def6501e7e3113aeaf3d4bfed441d6150631fbee9ce91f2f712ea"
	I0117 00:04:33.783706   60269 system_pods.go:59] 8 kube-system pods found
	I0117 00:04:33.783732   60269 system_pods.go:61] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.783737   60269 system_pods.go:61] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.783742   60269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.783746   60269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.783750   60269 system_pods.go:61] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.783754   60269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.783760   60269 system_pods.go:61] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.783764   60269 system_pods.go:61] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.783772   60269 system_pods.go:74] duration metric: took 3.768043559s to wait for pod list to return data ...
	I0117 00:04:33.783780   60269 default_sa.go:34] waiting for default service account to be created ...
	I0117 00:04:33.786490   60269 default_sa.go:45] found service account: "default"
	I0117 00:04:33.786515   60269 default_sa.go:55] duration metric: took 2.725972ms for default service account to be created ...
	I0117 00:04:33.786525   60269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0117 00:04:33.793345   60269 system_pods.go:86] 8 kube-system pods found
	I0117 00:04:33.793372   60269 system_pods.go:89] "coredns-5dd5756b68-gtx6b" [492a64a7-b9b2-4254-a59c-26feeabeb822] Running
	I0117 00:04:33.793377   60269 system_pods.go:89] "etcd-default-k8s-diff-port-967325" [46b7ad5d-ddd1-4a98-b733-d508da3dae30] Running
	I0117 00:04:33.793382   60269 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-967325" [fd512faf-44c2-491d-9d8e-9c4fff18ac12] Running
	I0117 00:04:33.793388   60269 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-967325" [5fb33a25-ec71-4386-8036-dc9f98db1c92] Running
	I0117 00:04:33.793392   60269 system_pods.go:89] "kube-proxy-2z6bl" [230eb872-e4ee-4bc3-b7c4-bb3fa0ba9580] Running
	I0117 00:04:33.793396   60269 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-967325" [07615af2-2df0-4657-b800-46b85c9d2787] Running
	I0117 00:04:33.793404   60269 system_pods.go:89] "metrics-server-57f55c9bc5-dqkll" [7120ca9d-d404-47b7-90d9-3e2609c8b60b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0117 00:04:33.793410   60269 system_pods.go:89] "storage-provisioner" [ca1859fa-3d3d-42e3-8e25-bc7ad078338e] Running
	I0117 00:04:33.793417   60269 system_pods.go:126] duration metric: took 6.886472ms to wait for k8s-apps to be running ...
	I0117 00:04:33.793427   60269 system_svc.go:44] waiting for kubelet service to be running ....
	I0117 00:04:33.793470   60269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0117 00:04:33.809147   60269 system_svc.go:56] duration metric: took 15.709692ms WaitForService to wait for kubelet.
	I0117 00:04:33.809197   60269 kubeadm.go:581] duration metric: took 4m16.421187944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0117 00:04:33.809225   60269 node_conditions.go:102] verifying NodePressure condition ...
	I0117 00:04:33.813251   60269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0117 00:04:33.813289   60269 node_conditions.go:123] node cpu capacity is 2
	I0117 00:04:33.813315   60269 node_conditions.go:105] duration metric: took 4.084961ms to run NodePressure ...
	I0117 00:04:33.813339   60269 start.go:228] waiting for startup goroutines ...
	I0117 00:04:33.813349   60269 start.go:233] waiting for cluster config update ...
	I0117 00:04:33.813362   60269 start.go:242] writing updated cluster config ...
	I0117 00:04:33.813716   60269 ssh_runner.go:195] Run: rm -f paused
	I0117 00:04:33.866136   60269 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0117 00:04:33.868353   60269 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-967325" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:14:14 UTC. --
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.530406692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450454530391784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=eff0776b-593e-459f-a5b4-b37d78b896b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.530954540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=abd3ed71-c279-4f9b-95d9-19b6a216a4c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.531078356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=abd3ed71-c279-4f9b-95d9-19b6a216a4c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.531298508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=abd3ed71-c279-4f9b-95d9-19b6a216a4c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.566867451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8fbc02ac-5174-4e1a-ab29-99e3dba02887 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.566955294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8fbc02ac-5174-4e1a-ab29-99e3dba02887 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.568081123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=31af9ea7-939f-4f34-bfb7-d3af4e9a8fda name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.568437283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450454568425377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=31af9ea7-939f-4f34-bfb7-d3af4e9a8fda name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.568893218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f3d7a9b7-6852-4e2f-8a34-88435e68bf22 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.568938377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f3d7a9b7-6852-4e2f-8a34-88435e68bf22 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.569198970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f3d7a9b7-6852-4e2f-8a34-88435e68bf22 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.604522982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0c212e38-126d-45e1-94fb-502b9f522125 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.604603827Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0c212e38-126d-45e1-94fb-502b9f522125 name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.606195923Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=928aad2a-24d2-42b9-9f68-d66d24f17fc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.606552064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450454606538532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=928aad2a-24d2-42b9-9f68-d66d24f17fc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.607437174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9922cd24-1eb6-4b88-b7b6-fdcea3b0cb77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.607500135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9922cd24-1eb6-4b88-b7b6-fdcea3b0cb77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.607713633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9922cd24-1eb6-4b88-b7b6-fdcea3b0cb77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.646703475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b4234280-25b3-437d-b82a-a91222a8777c name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.646773194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b4234280-25b3-437d-b82a-a91222a8777c name=/runtime.v1.RuntimeService/Version
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.651303427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3d63f588-6b02-410f-a089-70a24646fe3b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.651696688Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705450454651683243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3d63f588-6b02-410f-a089-70a24646fe3b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.652468768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=286f2739-2b56-439f-bee6-c7e8d32cfad9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.652560613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=286f2739-2b56-439f-bee6-c7e8d32cfad9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 17 00:14:14 old-k8s-version-771669 crio[714]: time="2024-01-17 00:14:14.652751133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9459eba4162bea5dc8c769586b31ccf9f45d9201ed153c40f8b80ae4d475cbaa,PodSandboxId:69a4cbb576850bc6c4be62e1d21727afd30ad296053abdd06386f1252d6d10c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705449358183063121,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c,},Annotations:map[string]string{io.kubernetes.container.hash: f50bc468,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942,PodSandboxId:861a780833a2deb5c59915629158588de868e05103601f238c2e9c2a913ba562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705449355780954681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-9njqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca0a69-ad00-45df-939c-881288d37686,},Annotations:map[string]string{io.kubernetes.container.hash: 7165149f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3,PodSandboxId:51a17462d718a909631cc53b017cd0d141bc3d8918b05bc918bbf92f73b2e7e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705449354464103153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 2542a96f-18aa-457e-9cfd-cf4b0aa4448a,},Annotations:map[string]string{io.kubernetes.container.hash: 24a39315,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7,PodSandboxId:9e58ca8a29986daf112eb9ad73847ce9df13f9e57d297bd80eab9390ee8c28e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705449353722688306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ghls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341db35b-48bc-40ec-81c2
-0be006551aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 4198dc3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174,PodSandboxId:453bb94b5ee72b4bb0228a472fb64051b3129beb77cc00e686042ce19173722d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705449346371159697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb9d38ddb35e9570be774e84fd982c8,},Annotations:map[string]string{io.ku
bernetes.container.hash: de0f8d80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877,PodSandboxId:5f2e4e8fdc5640306dacd9926fa83c1c77ef5f364e49d1b2a98ea8bf2afab860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705449345182077564,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1aa56c39f6b2c2acc2fab8651fa71a7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 934af826,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f,PodSandboxId:e3d35b7aba356b3769654ebda697bc1b531c1310fd81dfea2e7e853bd2fa0785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705449345041084989,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d,PodSandboxId:13d26353ba2d4f3990fa2d76225931251e677dbc6c13d73d5cab2cf7fd0fca4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705449344936598606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-771669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io
.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=286f2739-2b56-439f-bee6-c7e8d32cfad9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9459eba4162be       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   0                   69a4cbb576850       busybox
	21a6dceb568ad       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      18 minutes ago      Running             coredns                   0                   861a780833a2d       coredns-5644d7b6d9-9njqp
	5cbd938949134       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       0                   51a17462d718a       storage-provisioner
	a613a4e4ddfe3       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      18 minutes ago      Running             kube-proxy                0                   9e58ca8a29986       kube-proxy-9ghls
	7a937abd3b903       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   453bb94b5ee72       etcd-old-k8s-version-771669
	f4999acc2d6d7       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   5f2e4e8fdc564       kube-apiserver-old-k8s-version-771669
	911f813160b15       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   e3d35b7aba356       kube-controller-manager-old-k8s-version-771669
	494f74041efd3       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   13d26353ba2d4       kube-scheduler-old-k8s-version-771669
	
	
	==> coredns [21a6dceb568ad8a661f1746de82435dda3825cece7a3edbcf84a3bdc8d9a2942] <==
	E0116 23:46:10.187359       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.193152       1 trace.go:82] Trace[785493325]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.186709268 +0000 UTC m=+0.081907198) (total time: 30.006404152s):
	Trace[785493325]: [30.006404152s] [30.006404152s] END
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.193190       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0116 23:46:10.200490       1 trace.go:82] Trace[1301817211]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-01-16 23:45:40.19394028 +0000 UTC m=+0.089138224) (total time: 30.006532947s):
	Trace[1301817211]: [30.006532947s] [30.006532947s] END
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0116 23:46:10.200551       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	2024-01-16T23:46:15.289Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2024-01-16T23:46:15.321Z [INFO] 127.0.0.1:57441 - 44193 "HINFO IN 1365412375578555759.7322076794870044211. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008071628s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-16T23:55:55.993Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-16T23:55:55.993Z [INFO] CoreDNS-1.6.2
	2024-01-16T23:55:55.993Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T23:55:56.003Z [INFO] 127.0.0.1:59166 - 17216 "HINFO IN 9081841845838306910.8543492278547947642. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009686681s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-771669
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-771669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d44f2747221f24f9b150997f249dc925fca3b3e2
	                    minikube.k8s.io/name=old-k8s-version-771669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T23_45_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 23:45:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jan 2024 00:13:22 +0000   Tue, 16 Jan 2024 23:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.114
	  Hostname:    old-k8s-version-771669
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0599c334d1574c44852cd606008f4484
	 System UUID:                0599c334-d157-4c44-852c-d606008f4484
	 Boot ID:                    6a822f71-f4d9-4098-87a2-3d00d7bd6120
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-9njqp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-771669                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-apiserver-old-k8s-version-771669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-771669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-proxy-9ghls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-771669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                metrics-server-74d5856cc6-gj4zn                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-771669     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x7 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet, old-k8s-version-771669     Node old-k8s-version-771669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-771669     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-771669  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074468] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.864255] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569582] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135010] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.485542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.831981] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.125426] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.166674] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.156891] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.236650] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +18.743957] systemd-fstab-generator[1024]: Ignoring "noauto" for root device
	[  +0.411438] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan16 23:56] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [7a937abd3b903e3b3077c167550f27bcd1f1d4f5aab680120e329fb350780174] <==
	2024-01-16 23:55:46.463616 I | etcdserver: restarting member d80e54998a205cf3 in cluster fe5d4cbbe2066f7 at commit index 527
	2024-01-16 23:55:46.463912 I | raft: d80e54998a205cf3 became follower at term 2
	2024-01-16 23:55:46.463954 I | raft: newRaft d80e54998a205cf3 [peers: [], term: 2, commit: 527, applied: 0, lastindex: 527, lastterm: 2]
	2024-01-16 23:55:46.471794 W | auth: simple token is not cryptographically signed
	2024-01-16 23:55:46.474478 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 23:55:46.476050 I | etcdserver/membership: added member d80e54998a205cf3 [https://192.168.72.114:2380] to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:46.476228 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 23:55:46.476294 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 23:55:46.476369 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 23:55:46.476491 I | embed: listening for metrics on http://192.168.72.114:2381
	2024-01-16 23:55:46.477296 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 23:55:48.264496 I | raft: d80e54998a205cf3 is starting a new election at term 2
	2024-01-16 23:55:48.264548 I | raft: d80e54998a205cf3 became candidate at term 3
	2024-01-16 23:55:48.264567 I | raft: d80e54998a205cf3 received MsgVoteResp from d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.264578 I | raft: d80e54998a205cf3 became leader at term 3
	2024-01-16 23:55:48.264584 I | raft: raft.node: d80e54998a205cf3 elected leader d80e54998a205cf3 at term 3
	2024-01-16 23:55:48.266381 I | etcdserver: published {Name:old-k8s-version-771669 ClientURLs:[https://192.168.72.114:2379]} to cluster fe5d4cbbe2066f7
	2024-01-16 23:55:48.266872 I | embed: ready to serve client requests
	2024-01-16 23:55:48.267138 I | embed: ready to serve client requests
	2024-01-16 23:55:48.268857 I | embed: serving client requests on 192.168.72.114:2379
	2024-01-16 23:55:48.272176 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-17 00:05:48.299555 I | mvcc: store.index: compact 831
	2024-01-17 00:05:48.301444 I | mvcc: finished scheduled compaction at 831 (took 1.48289ms)
	2024-01-17 00:10:48.307018 I | mvcc: store.index: compact 1049
	2024-01-17 00:10:48.309423 I | mvcc: finished scheduled compaction at 1049 (took 1.556943ms)
	
	
	==> kernel <==
	 00:14:14 up 19 min,  0 users,  load average: 0.18, 0.15, 0.10
	Linux old-k8s-version-771669 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f4999acc2d6d71044a8ba84adb7506232830c9a62b4582419f4c84a83f22f877] <==
	I0117 00:06:52.568125       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:06:52.568324       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:06:52.568428       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:06:52.568460       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:08:52.568751       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:08:52.569130       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:08:52.569216       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:08:52.569239       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:10:52.570364       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:10:52.570659       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:10:52.570748       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:10:52.570771       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:11:52.571161       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:11:52.571452       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:11:52.571520       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:11:52.571559       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0117 00:13:52.571887       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0117 00:13:52.572257       1 handler_proxy.go:99] no RequestInfo found in the context
	E0117 00:13:52.572431       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0117 00:13:52.572517       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [911f813160b15e950cd7732aae4c9659902270f743ebd2da4832802edbcb614f] <==
	E0117 00:07:44.159270       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:07:54.492892       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:08:14.411306       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:08:26.495091       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:08:44.663350       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:08:58.497544       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:09:14.915110       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:09:30.499632       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:09:45.167228       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:10:02.502151       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:10:15.419628       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:10:34.504463       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:10:45.671634       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:11:06.506665       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:11:15.924066       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:11:38.508658       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:11:46.176241       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:12:10.510374       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:12:16.428278       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:12:42.512853       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:12:46.680268       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:13:14.515039       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:13:16.932502       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0117 00:13:46.516781       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0117 00:13:47.184739       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [a613a4e4ddfe3f4050fb4411ca506114705c7b0d4df46ffc0b7d170fd8e1d6a7] <==
	W0116 23:45:41.007361       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:45:41.016329       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:45:41.016352       1 server_others.go:149] Using iptables Proxier.
	I0116 23:45:41.016667       1 server.go:529] Version: v1.16.0
	I0116 23:45:41.018410       1 config.go:131] Starting endpoints config controller
	I0116 23:45:41.024018       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:45:41.018730       1 config.go:313] Starting service config controller
	I0116 23:45:41.024397       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:45:41.124802       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:45:41.125007       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0116 23:55:53.969591       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 23:55:53.981521       1 node.go:135] Successfully retrieved node IP: 192.168.72.114
	I0116 23:55:53.981589       1 server_others.go:149] Using iptables Proxier.
	I0116 23:55:53.982391       1 server.go:529] Version: v1.16.0
	I0116 23:55:53.983881       1 config.go:313] Starting service config controller
	I0116 23:55:53.983929       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 23:55:53.984039       1 config.go:131] Starting endpoints config controller
	I0116 23:55:53.984056       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 23:55:54.084183       1 shared_informer.go:204] Caches are synced for service config 
	I0116 23:55:54.084427       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [494f74041efd369d75556b4605655b109a66353e45733295db503b57f9fe851d] <==
	E0116 23:45:19.290133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 23:45:19.293479       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 23:45:19.294843       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 23:45:19.296276       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 23:45:19.297284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 23:45:19.302219       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 23:45:19.306970       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 23:45:19.307150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.307930       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:45:19.308102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0116 23:55:45.888159       1 serving.go:319] Generated self-signed cert in-memory
	W0116 23:55:51.429069       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 23:55:51.429295       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 23:55:51.429326       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 23:55:51.429407       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 23:55:51.479301       1 server.go:143] Version: v1.16.0
	I0116 23:55:51.479424       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0116 23:55:51.496560       1 authorization.go:47] Authorization is disabled
	W0116 23:55:51.496594       1 authentication.go:79] Authentication is disabled
	I0116 23:55:51.496610       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 23:55:51.497402       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 23:55:51.544869       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 23:55:51.545090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545174       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 23:55:51.545242       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 23:55:14 UTC, ends at Wed 2024-01-17 00:14:15 UTC. --
	Jan 17 00:09:41 old-k8s-version-771669 kubelet[1030]: E0117 00:09:41.444444    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:09:54 old-k8s-version-771669 kubelet[1030]: E0117 00:09:54.444474    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:06 old-k8s-version-771669 kubelet[1030]: E0117 00:10:06.444489    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:19 old-k8s-version-771669 kubelet[1030]: E0117 00:10:19.444124    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:32 old-k8s-version-771669 kubelet[1030]: E0117 00:10:32.444520    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:43 old-k8s-version-771669 kubelet[1030]: E0117 00:10:43.517317    1030 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 17 00:10:45 old-k8s-version-771669 kubelet[1030]: E0117 00:10:45.444419    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:10:57 old-k8s-version-771669 kubelet[1030]: E0117 00:10:57.446558    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:12 old-k8s-version-771669 kubelet[1030]: E0117 00:11:12.444116    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:23 old-k8s-version-771669 kubelet[1030]: E0117 00:11:23.449289    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:38 old-k8s-version-771669 kubelet[1030]: E0117 00:11:38.444251    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455296    1030 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455380    1030 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455430    1030 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 17 00:11:52 old-k8s-version-771669 kubelet[1030]: E0117 00:11:52.455461    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 17 00:12:03 old-k8s-version-771669 kubelet[1030]: E0117 00:12:03.444940    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:17 old-k8s-version-771669 kubelet[1030]: E0117 00:12:17.445881    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:30 old-k8s-version-771669 kubelet[1030]: E0117 00:12:30.445107    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:42 old-k8s-version-771669 kubelet[1030]: E0117 00:12:42.444153    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:12:53 old-k8s-version-771669 kubelet[1030]: E0117 00:12:53.445269    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:07 old-k8s-version-771669 kubelet[1030]: E0117 00:13:07.444539    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:21 old-k8s-version-771669 kubelet[1030]: E0117 00:13:21.444749    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:34 old-k8s-version-771669 kubelet[1030]: E0117 00:13:34.444779    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:13:47 old-k8s-version-771669 kubelet[1030]: E0117 00:13:47.445167    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 17 00:14:01 old-k8s-version-771669 kubelet[1030]: E0117 00:14:01.444446    1030 pod_workers.go:191] Error syncing pod 8b24c979-032d-4e7e-a0f6-082f680542e6 ("metrics-server-74d5856cc6-gj4zn_kube-system(8b24c979-032d-4e7e-a0f6-082f680542e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [5cbd938949134baee796ae711e12c53c8ce47a7370eea4b846249e6b9a1c93b3] <==
	I0116 23:45:41.784762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:45:41.799195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:45:41.799369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:45:41.808193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:45:41.809025       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:45:41.810922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4 became leader
	I0116 23:45:41.909835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_904cea1a-b29f-4d17-80e7-b423158d6ff4!
	I0116 23:55:55.015814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 23:55:55.084172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 23:55:55.084535       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 23:56:12.492253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 23:56:12.492881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	I0116 23:56:12.493615       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"758bc903-948e-4786-bcf0-959877c69c8e", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0 became leader
	I0116 23:56:12.593934       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-771669_3d5abd08-9917-4fef-aeb2-b69dff41edb0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-771669 -n old-k8s-version-771669
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-771669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-gj4zn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1 (67.080691ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-gj4zn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-771669 describe pod metrics-server-74d5856cc6-gj4zn: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.48s)

                                                
                                    

Test pass (250/312)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 24.08
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 17.37
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.18
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 17.6
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.57
31 TestOffline 65.94
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 209.07
38 TestAddons/parallel/Registry 24.16
40 TestAddons/parallel/InspektorGadget 17.15
41 TestAddons/parallel/MetricsServer 5.93
42 TestAddons/parallel/HelmTiller 12.47
44 TestAddons/parallel/CSI 63.14
45 TestAddons/parallel/Headlamp 14.95
46 TestAddons/parallel/CloudSpanner 7.16
47 TestAddons/parallel/LocalPath 28.4
48 TestAddons/parallel/NvidiaDevicePlugin 5.75
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 81.89
55 TestCertExpiration 339.91
57 TestForceSystemdFlag 85.1
58 TestForceSystemdEnv 83.77
60 TestKVMDriverInstallOrUpdate 5.31
64 TestErrorSpam/setup 48.32
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.76
67 TestErrorSpam/pause 1.6
68 TestErrorSpam/unpause 1.64
69 TestErrorSpam/stop 2.26
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 97.44
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 38.44
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
81 TestFunctional/serial/CacheCmd/cache/add_local 2.12
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 37.61
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.42
92 TestFunctional/serial/LogsFileCmd 1.42
93 TestFunctional/serial/InvalidService 4.91
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 14.09
97 TestFunctional/parallel/DryRun 0.28
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 1.36
103 TestFunctional/parallel/ServiceCmdConnect 12.66
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 43.49
107 TestFunctional/parallel/SSHCmd 0.56
108 TestFunctional/parallel/CpCmd 1.5
109 TestFunctional/parallel/MySQL 28.85
110 TestFunctional/parallel/FileSync 0.31
111 TestFunctional/parallel/CertSync 1.68
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
119 TestFunctional/parallel/License 0.47
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
130 TestFunctional/parallel/Version/short 0.07
131 TestFunctional/parallel/Version/components 0.79
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.36
136 TestFunctional/parallel/ImageCommands/ImageBuild 5.48
137 TestFunctional/parallel/ImageCommands/Setup 1.97
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.92
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.5
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.01
141 TestFunctional/parallel/ServiceCmd/List 0.26
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.26
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
144 TestFunctional/parallel/ServiceCmd/Format 0.33
145 TestFunctional/parallel/ServiceCmd/URL 0.34
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
150 TestFunctional/parallel/ProfileCmd/profile_list 0.39
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
152 TestFunctional/parallel/MountCmd/any-port 24.66
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.32
154 TestFunctional/parallel/ImageCommands/ImageRemove 1.63
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.41
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 6.42
157 TestFunctional/parallel/MountCmd/specific-port 1.69
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 82.49
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.4
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
172 TestJSONOutput/start/Command 95.47
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.67
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.6
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.12
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.22
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 93.7
204 TestMountStart/serial/StartWithMountFirst 26.47
205 TestMountStart/serial/VerifyMountFirst 0.39
206 TestMountStart/serial/StartWithMountSecond 26.91
207 TestMountStart/serial/VerifyMountSecond 0.39
208 TestMountStart/serial/DeleteFirst 0.89
209 TestMountStart/serial/VerifyMountPostDelete 0.4
210 TestMountStart/serial/Stop 1.09
211 TestMountStart/serial/RestartStopped 22.86
212 TestMountStart/serial/VerifyMountPostStop 0.42
215 TestMultiNode/serial/FreshStart2Nodes 160.15
216 TestMultiNode/serial/DeployApp2Nodes 5.68
217 TestMultiNode/serial/PingHostFrom2Pods 0.9
218 TestMultiNode/serial/AddNode 43.55
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.22
221 TestMultiNode/serial/CopyFile 7.58
222 TestMultiNode/serial/StopNode 2.26
223 TestMultiNode/serial/StartAfterStop 29.52
225 TestMultiNode/serial/DeleteNode 1.55
227 TestMultiNode/serial/RestartMultiNode 440.19
228 TestMultiNode/serial/ValidateNameConflict 47.19
235 TestScheduledStopUnix 116.15
239 TestRunningBinaryUpgrade 222.56
241 TestKubernetesUpgrade 178.7
247 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
248 TestNoKubernetes/serial/StartWithK8s 74.51
253 TestNetworkPlugins/group/false 3.51
257 TestNoKubernetes/serial/StartWithStopK8s 66.06
258 TestNoKubernetes/serial/Start 46.9
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
260 TestNoKubernetes/serial/ProfileList 31.87
261 TestNoKubernetes/serial/Stop 2.58
262 TestNoKubernetes/serial/StartNoArgs 21.03
271 TestPause/serial/Start 116.8
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
273 TestStoppedBinaryUpgrade/Setup 2.19
274 TestStoppedBinaryUpgrade/Upgrade 157.08
275 TestPause/serial/SecondStartNoReconfiguration 45.29
276 TestPause/serial/Pause 2.4
277 TestPause/serial/VerifyStatus 0.27
278 TestPause/serial/Unpause 1.12
279 TestPause/serial/PauseAgain 0.95
280 TestPause/serial/DeletePaused 1.04
281 TestPause/serial/VerifyDeletedResources 0.46
282 TestNetworkPlugins/group/auto/Start 105.75
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
284 TestNetworkPlugins/group/kindnet/Start 81.88
285 TestNetworkPlugins/group/calico/Start 132.42
286 TestNetworkPlugins/group/custom-flannel/Start 122.3
287 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
288 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
289 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
290 TestNetworkPlugins/group/auto/KubeletFlags 0.28
291 TestNetworkPlugins/group/auto/NetCatPod 16.31
292 TestNetworkPlugins/group/kindnet/DNS 0.22
293 TestNetworkPlugins/group/kindnet/Localhost 0.19
294 TestNetworkPlugins/group/kindnet/HairPin 0.19
295 TestNetworkPlugins/group/auto/DNS 0.25
296 TestNetworkPlugins/group/auto/Localhost 0.25
297 TestNetworkPlugins/group/auto/HairPin 0.23
298 TestNetworkPlugins/group/enable-default-cni/Start 100.08
299 TestNetworkPlugins/group/flannel/Start 107.24
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/KubeletFlags 0.25
302 TestNetworkPlugins/group/calico/NetCatPod 16.26
303 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
304 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
305 TestNetworkPlugins/group/calico/DNS 0.24
306 TestNetworkPlugins/group/calico/Localhost 0.19
307 TestNetworkPlugins/group/calico/HairPin 0.27
308 TestNetworkPlugins/group/custom-flannel/DNS 0.2
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
311 TestNetworkPlugins/group/bridge/Start 67.83
313 TestStartStop/group/old-k8s-version/serial/FirstStart 149.78
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
319 TestNetworkPlugins/group/flannel/ControllerPod 6.01
320 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
321 TestNetworkPlugins/group/flannel/NetCatPod 15.45
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
323 TestNetworkPlugins/group/bridge/NetCatPod 13.28
325 TestStartStop/group/no-preload/serial/FirstStart 122.51
326 TestNetworkPlugins/group/flannel/DNS 0.22
327 TestNetworkPlugins/group/flannel/Localhost 0.16
328 TestNetworkPlugins/group/flannel/HairPin 0.17
329 TestNetworkPlugins/group/bridge/DNS 0.24
330 TestNetworkPlugins/group/bridge/Localhost 0.25
331 TestNetworkPlugins/group/bridge/HairPin 0.21
333 TestStartStop/group/embed-certs/serial/FirstStart 110.91
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 133.26
336 TestStartStop/group/old-k8s-version/serial/DeployApp 11.5
337 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
339 TestStartStop/group/no-preload/serial/DeployApp 9.3
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
342 TestStartStop/group/embed-certs/serial/DeployApp 9.32
343 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
349 TestStartStop/group/old-k8s-version/serial/SecondStart 398.55
352 TestStartStop/group/no-preload/serial/SecondStart 562.61
353 TestStartStop/group/embed-certs/serial/SecondStart 842.18
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 835.47
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
367 TestStartStop/group/newest-cni/serial/FirstStart 59.32
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.32
370 TestStartStop/group/newest-cni/serial/Stop 3.12
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
372 TestStartStop/group/newest-cni/serial/SecondStart 46.52
373 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
376 TestStartStop/group/newest-cni/serial/Pause 2.46
TestDownloadOnly/v1.16.0/json-events (24.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-892925 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-892925 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.084605301s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (24.08s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-892925
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-892925: exit status 85 (71.618078ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-892925 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC |          |
	|         | -p download-only-892925        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 22:36:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 22:36:16.884981   14942 out.go:296] Setting OutFile to fd 1 ...
	I0116 22:36:16.885091   14942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:36:16.885101   14942 out.go:309] Setting ErrFile to fd 2...
	I0116 22:36:16.885105   14942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:36:16.885288   14942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	W0116 22:36:16.885411   14942 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17975-6238/.minikube/config/config.json: open /home/jenkins/minikube-integration/17975-6238/.minikube/config/config.json: no such file or directory
	I0116 22:36:16.885967   14942 out.go:303] Setting JSON to true
	I0116 22:36:16.886919   14942 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1123,"bootTime":1705443454,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 22:36:16.886984   14942 start.go:138] virtualization: kvm guest
	I0116 22:36:16.889442   14942 out.go:97] [download-only-892925] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 22:36:16.891102   14942 out.go:169] MINIKUBE_LOCATION=17975
	W0116 22:36:16.889539   14942 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 22:36:16.889573   14942 notify.go:220] Checking for updates...
	I0116 22:36:16.893987   14942 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 22:36:16.895295   14942 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:36:16.896660   14942 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:36:16.897915   14942 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 22:36:16.900096   14942 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 22:36:16.900321   14942 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 22:36:16.995114   14942 out.go:97] Using the kvm2 driver based on user configuration
	I0116 22:36:16.995152   14942 start.go:298] selected driver: kvm2
	I0116 22:36:16.995160   14942 start.go:902] validating driver "kvm2" against <nil>
	I0116 22:36:16.995490   14942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:36:16.995633   14942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 22:36:17.009897   14942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 22:36:17.009978   14942 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 22:36:17.010491   14942 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 22:36:17.010668   14942 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 22:36:17.010719   14942 cni.go:84] Creating CNI manager for ""
	I0116 22:36:17.010731   14942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:36:17.010741   14942 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 22:36:17.010749   14942 start_flags.go:321] config:
	{Name:download-only-892925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-892925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:36:17.010946   14942 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:36:17.013007   14942 out.go:97] Downloading VM boot image ...
	I0116 22:36:17.013038   14942 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 22:36:25.133387   14942 out.go:97] Starting control plane node download-only-892925 in cluster download-only-892925
	I0116 22:36:25.133411   14942 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 22:36:25.227853   14942 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0116 22:36:25.227906   14942 cache.go:56] Caching tarball of preloaded images
	I0116 22:36:25.228075   14942 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 22:36:25.229934   14942 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 22:36:25.229955   14942 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0116 22:36:25.331118   14942 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-892925"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
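Note on the result above: with --download-only no control plane is ever created, so "minikube logs" has nothing to read; the exit-85 failure of the logs command is expected and does not fail the test. A minimal shell sketch of what the v1.16.0 download-only run leaves behind (command and cache paths copied from the log above; outside this CI run the cache normally lives under $HOME/.minikube):

    # download the ISO and the v1.16.0 preload without creating a VM
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-892925 \
      --force --alsologtostderr --kubernetes-version=v1.16.0 \
      --container-runtime=crio --driver=kvm2
    # both artifacts should now be cached locally
    ls /home/jenkins/minikube-integration/17975-6238/.minikube/cache/iso/amd64/
    ls /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/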

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-892925
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (17.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-404581 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-404581 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.373866223s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (17.37s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-404581
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-404581: exit status 85 (178.556773ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-892925 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC |                     |
	|         | -p download-only-892925        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC | 16 Jan 24 22:36 UTC |
	| delete  | -p download-only-892925        | download-only-892925 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC | 16 Jan 24 22:36 UTC |
	| start   | -o=json --download-only        | download-only-404581 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC |                     |
	|         | -p download-only-404581        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 22:36:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 22:36:41.326089   15139 out.go:296] Setting OutFile to fd 1 ...
	I0116 22:36:41.326316   15139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:36:41.326324   15139 out.go:309] Setting ErrFile to fd 2...
	I0116 22:36:41.326328   15139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:36:41.326540   15139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 22:36:41.327088   15139 out.go:303] Setting JSON to true
	I0116 22:36:41.327842   15139 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1148,"bootTime":1705443454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 22:36:41.327896   15139 start.go:138] virtualization: kvm guest
	I0116 22:36:41.330101   15139 out.go:97] [download-only-404581] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 22:36:41.331682   15139 out.go:169] MINIKUBE_LOCATION=17975
	I0116 22:36:41.330236   15139 notify.go:220] Checking for updates...
	I0116 22:36:41.334296   15139 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 22:36:41.335609   15139 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:36:41.336779   15139 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:36:41.337970   15139 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 22:36:41.340641   15139 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 22:36:41.340872   15139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 22:36:41.372434   15139 out.go:97] Using the kvm2 driver based on user configuration
	I0116 22:36:41.372461   15139 start.go:298] selected driver: kvm2
	I0116 22:36:41.372466   15139 start.go:902] validating driver "kvm2" against <nil>
	I0116 22:36:41.372770   15139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:36:41.372836   15139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 22:36:41.387406   15139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 22:36:41.387461   15139 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 22:36:41.388091   15139 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 22:36:41.388298   15139 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 22:36:41.388380   15139 cni.go:84] Creating CNI manager for ""
	I0116 22:36:41.388397   15139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:36:41.388409   15139 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 22:36:41.388420   15139 start_flags.go:321] config:
	{Name:download-only-404581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-404581 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:36:41.388636   15139 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:36:41.390757   15139 out.go:97] Starting control plane node download-only-404581 in cluster download-only-404581
	I0116 22:36:41.390772   15139 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 22:36:41.489097   15139 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 22:36:41.489121   15139 cache.go:56] Caching tarball of preloaded images
	I0116 22:36:41.489240   15139 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 22:36:41.491153   15139 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0116 22:36:41.491166   15139 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0116 22:36:41.592471   15139 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-404581"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.18s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-404581
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (17.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-106740 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-106740 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.596079736s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (17.60s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-106740
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-106740: exit status 85 (72.52266ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-892925 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC |                     |
	|         | -p download-only-892925           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC | 16 Jan 24 22:36 UTC |
	| delete  | -p download-only-892925           | download-only-892925 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC | 16 Jan 24 22:36 UTC |
	| start   | -o=json --download-only           | download-only-404581 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC |                     |
	|         | -p download-only-404581           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC | 16 Jan 24 22:36 UTC |
	| delete  | -p download-only-404581           | download-only-404581 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC | 16 Jan 24 22:36 UTC |
	| start   | -o=json --download-only           | download-only-106740 | jenkins | v1.32.0 | 16 Jan 24 22:36 UTC |                     |
	|         | -p download-only-106740           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 22:36:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 22:36:59.150058   15327 out.go:296] Setting OutFile to fd 1 ...
	I0116 22:36:59.150302   15327 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:36:59.150312   15327 out.go:309] Setting ErrFile to fd 2...
	I0116 22:36:59.150317   15327 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:36:59.150533   15327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 22:36:59.151145   15327 out.go:303] Setting JSON to true
	I0116 22:36:59.151940   15327 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1165,"bootTime":1705443454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 22:36:59.152001   15327 start.go:138] virtualization: kvm guest
	I0116 22:36:59.154470   15327 out.go:97] [download-only-106740] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 22:36:59.156063   15327 out.go:169] MINIKUBE_LOCATION=17975
	I0116 22:36:59.154682   15327 notify.go:220] Checking for updates...
	I0116 22:36:59.159228   15327 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 22:36:59.160839   15327 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:36:59.162482   15327 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:36:59.164282   15327 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 22:36:59.167243   15327 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 22:36:59.167471   15327 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 22:36:59.200003   15327 out.go:97] Using the kvm2 driver based on user configuration
	I0116 22:36:59.200034   15327 start.go:298] selected driver: kvm2
	I0116 22:36:59.200039   15327 start.go:902] validating driver "kvm2" against <nil>
	I0116 22:36:59.200357   15327 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:36:59.200437   15327 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17975-6238/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 22:36:59.214595   15327 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 22:36:59.214675   15327 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 22:36:59.215147   15327 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 22:36:59.215312   15327 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 22:36:59.215366   15327 cni.go:84] Creating CNI manager for ""
	I0116 22:36:59.215378   15327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 22:36:59.215387   15327 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 22:36:59.215400   15327 start_flags.go:321] config:
	{Name:download-only-106740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-106740 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:36:59.215553   15327 iso.go:125] acquiring lock: {Name:mk6863dbf0a498ca798c4a04316f450234400017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 22:36:59.217491   15327 out.go:97] Starting control plane node download-only-106740 in cluster download-only-106740
	I0116 22:36:59.217510   15327 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 22:36:59.312481   15327 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0116 22:36:59.312503   15327 cache.go:56] Caching tarball of preloaded images
	I0116 22:36:59.312685   15327 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 22:36:59.314796   15327 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 22:36:59.314822   15327 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0116 22:36:59.417429   15327 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/17975-6238/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-106740"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-106740
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-992131 --alsologtostderr --binary-mirror http://127.0.0.1:46169 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-992131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-992131
--- PASS: TestBinaryMirror (0.57s)
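As the flag name suggests, --binary-mirror points the kubectl/kubelet/kubeadm downloads at a caller-supplied HTTP server instead of the default upstream location; in this run the test harness serves one on 127.0.0.1:46169. A sketch of the same invocation, assuming a mirror is already listening on that address with the layout minikube expects:

    out/minikube-linux-amd64 start --download-only -p binary-mirror-992131 \
      --alsologtostderr --binary-mirror http://127.0.0.1:46169 \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-992131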

                                                
                                    
TestOffline (65.94s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-689316 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-689316 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.692011749s)
helpers_test.go:175: Cleaning up "offline-crio-689316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-689316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-689316: (1.249336518s)
--- PASS: TestOffline (65.94s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-033244
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-033244: exit status 85 (59.41174ms)

                                                
                                                
-- stdout --
	* Profile "addons-033244" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-033244"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-033244
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-033244: exit status 85 (61.625655ms)

                                                
                                                
-- stdout --
	* Profile "addons-033244" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-033244"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
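Both PreSetup checks assert the same contract: addon commands against a profile that has not been created yet must fail with exit status 85 and point the user at "minikube start". A minimal shell sketch of that assertion (profile name taken from the log above):

    out/minikube-linux-amd64 addons enable dashboard -p addons-033244
    echo "enable exited with $?"    # expected: 85 while the profile does not exist
    out/minikube-linux-amd64 addons disable dashboard -p addons-033244
    echo "disable exited with $?"   # expected: 85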

                                                
                                    
TestAddons/Setup (209.07s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-033244 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-033244 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m29.069249911s)
--- PASS: TestAddons/Setup (209.07s)
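The start line above enables every addon exercised later in this report in one shot. A trimmed sketch covering only the addons the parallel blocks below actually touch (same flags, shorter addon list chosen here purely for illustration):

    out/minikube-linux-amd64 start -p addons-033244 --wait=true --memory=4000 \
      --driver=kvm2 --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=helm-tiller \
      --addons=inspektor-gadget --addons=volumesnapshots --addons=csi-hostpath-driver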

                                                
                                    
TestAddons/parallel/Registry (24.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 26.134486ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-b9qhk" [9a6a8c0d-3a15-42ec-8b4e-a34e508c3590] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006165685s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lzc5j" [3aa946b0-5483-4ad8-82c0-c41ab2daa594] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005561866s
addons_test.go:340: (dbg) Run:  kubectl --context addons-033244 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-033244 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-033244 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.232126912s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 ip
2024/01/16 22:41:10 [DEBUG] GET http://192.168.39.234:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (24.16s)
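The registry check above boils down to two probes: the registry service must answer on its cluster DNS name from inside a pod, and registry-proxy must answer on the node IP from outside. A sketch of both probes (image, service name and node address copied from the log; curl stands in for the plain GET the test performs):

    # in-cluster probe via a throwaway busybox pod
    kubectl --context addons-033244 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # host-side probe against registry-proxy on the node IP reported by "minikube -p addons-033244 ip"
    curl http://192.168.39.234:5000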

                                                
                                    
TestAddons/parallel/InspektorGadget (17.15s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kst8p" [82c2f0c8-76b2-4366-81cb-66a8c91dd4ae] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005216784s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-033244
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-033244: (11.14184229s)
--- PASS: TestAddons/parallel/InspektorGadget (17.15s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.93s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 26.050227ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-khssf" [2ccf4b20-f4de-4a17-8529-1399f0552a28] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008066166s
addons_test.go:415: (dbg) Run:  kubectl --context addons-033244 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.93s)
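The metrics-server check has two steps: wait for the k8s-app=metrics-server pod, then confirm the metrics API can actually serve pod metrics. A sketch of the same steps (label, namespace and timeout taken from the log; kubectl wait stands in for the test's own polling helper):

    kubectl --context addons-033244 -n kube-system wait --for=condition=ready \
      pod -l k8s-app=metrics-server --timeout=6m
    kubectl --context addons-033244 top pods -n kube-system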

                                                
                                    
TestAddons/parallel/HelmTiller (12.47s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 26.25014ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-29mdf" [a3510868-5d9a-481c-a603-e4f068e40e0b] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008998791s
addons_test.go:473: (dbg) Run:  kubectl --context addons-033244 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-033244 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.739696447s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.47s)
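Here "version" is the actual probe: a Helm 2 client reports both its own and tiller's version, so the command only succeeds if the tiller-deploy pod is reachable. A sketch of that probe (image tag and namespace copied from the log):

    kubectl --context addons-033244 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version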

                                                
                                    
TestAddons/parallel/CSI (63.14s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.031583ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-033244 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-033244 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7c46d610-7220-49df-add4-8417a6cd753b] Pending
helpers_test.go:344: "task-pv-pod" [7c46d610-7220-49df-add4-8417a6cd753b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7c46d610-7220-49df-add4-8417a6cd753b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.015325517s
addons_test.go:584: (dbg) Run:  kubectl --context addons-033244 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-033244 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-033244 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-033244 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-033244 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-033244 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-033244 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2003ef18-0066-4bbe-9cbf-b20ea00389b3] Pending
helpers_test.go:344: "task-pv-pod-restore" [2003ef18-0066-4bbe-9cbf-b20ea00389b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2003ef18-0066-4bbe-9cbf-b20ea00389b3] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.005591805s
addons_test.go:626: (dbg) Run:  kubectl --context addons-033244 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-033244 delete pod task-pv-pod-restore: (1.274289164s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-033244 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-033244 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-033244 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.857306541s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.14s)
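Note: the repeated helpers_test.go:394 invocations above are simply polling the PVC phase until it reports Bound. Below is a minimal, illustrative Go sketch of such a poller, shelling out to kubectl exactly as the logged commands do; the function name, poll interval, and structure are assumptions, not the actual helper in this repository.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase shells out to kubectl until the named PVC reports the wanted
	// phase (e.g. "Bound") or the timeout expires.
	func waitForPVCPhase(kubeContext, namespace, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", namespace, name, want, timeout)
	}

	func main() {
		// Values taken from the log above: context addons-033244, PVC hpvc, 6m0s wait.
		if err := waitForPVCPhase("addons-033244", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}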

                                                
                                    
TestAddons/parallel/Headlamp (14.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-033244 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-033244 --alsologtostderr -v=1: (1.936104932s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-r2dlj" [68006346-d91a-4daf-bd72-77f14555bdd0] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-r2dlj" [68006346-d91a-4daf-bd72-77f14555bdd0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-r2dlj" [68006346-d91a-4daf-bd72-77f14555bdd0] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.010536161s
--- PASS: TestAddons/parallel/Headlamp (14.95s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.16s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-2h4xz" [c4bcd4e1-71ca-4c8b-9698-799cc1fdf4b6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.072192367s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-033244
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-033244: (1.07823979s)
--- PASS: TestAddons/parallel/CloudSpanner (7.16s)

                                                
                                    
TestAddons/parallel/LocalPath (28.4s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-033244 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-033244 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [97091a08-e1b3-48be-ba64-47b038e38d8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [97091a08-e1b3-48be-ba64-47b038e38d8b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [97091a08-e1b3-48be-ba64-47b038e38d8b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 12.004021356s
addons_test.go:891: (dbg) Run:  kubectl --context addons-033244 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 ssh "cat /opt/local-path-provisioner/pvc-65aa8f6a-073e-4e60-ba0a-da47faceff6d_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-033244 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-033244 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-033244 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (28.40s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-44vfc" [9c7e0da1-cb2d-4e04-bc03-be4506fab2af] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005664085s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-033244
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.75s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-sdmtr" [ec6dcf21-656d-4cc6-a676-1966a5ebb1f5] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004187783s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-033244 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-033244 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (81.89s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-714920 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-714920 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m20.344280732s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-714920 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-714920 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-714920 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-714920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-714920
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-714920: (1.026466769s)
--- PASS: TestCertOptions (81.89s)
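Note: this test checks the extra SANs passed via --apiserver-ips/--apiserver-names by inspecting the apiserver certificate with the openssl command shown above. The following is a rough, illustrative Go equivalent of that check, not the repository's cert_options_test.go; the certificate path comes from the logged command and lives inside the minikube node, so the snippet would have to run there (or against a copied-out cert).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Path taken from the openssl invocation in the log above.
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The test passes --apiserver-ips=192.168.15.15 and --apiserver-names=www.google.com,
		// so both should appear in the certificate's subject alternative names.
		wantIP := net.ParseIP("192.168.15.15")
		foundIP := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(wantIP) {
				foundIP = true
			}
		}
		fmt.Printf("SAN DNS names: %v\n", cert.DNSNames)
		fmt.Printf("SAN contains %s: %v\n", wantIP, foundIP)
	}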

                                                
                                    
TestCertExpiration (339.91s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-997317 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0116 23:36:00.968152   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-997317 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m22.132762393s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-997317 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-997317 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m16.711452517s)
helpers_test.go:175: Cleaning up "cert-expiration-997317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-997317
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-997317: (1.06234648s)
--- PASS: TestCertExpiration (339.91s)
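Note: the first start above deliberately issues 3-minute certificates, and the second start renews them with --cert-expiration=8760h, i.e. 365 days x 24 h = 8760 h, a one-year lifetime.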

                                                
                                    
TestForceSystemdFlag (85.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-463786 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-463786 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m23.877155268s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-463786 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-463786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-463786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-463786: (1.014575563s)
--- PASS: TestForceSystemdFlag (85.10s)

                                                
                                    
TestForceSystemdEnv (83.77s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-305396 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-305396 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.988571716s)
helpers_test.go:175: Cleaning up "force-systemd-env-305396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-305396
--- PASS: TestForceSystemdEnv (83.77s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.31s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.31s)

                                                
                                    
TestErrorSpam/setup (48.32s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-320906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-320906 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-320906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-320906 --driver=kvm2  --container-runtime=crio: (48.319553063s)
--- PASS: TestErrorSpam/setup (48.32s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

                                                
                                    
TestErrorSpam/stop (2.26s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 stop: (2.092170322s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-320906 --log_dir /tmp/nospam-320906 stop
--- PASS: TestErrorSpam/stop (2.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17975-6238/.minikube/files/etc/test/nested/copy/14930/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (97.44s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949292 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-949292 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.442174424s)
--- PASS: TestFunctional/serial/StartWithProxy (97.44s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949292 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-949292 --alsologtostderr -v=8: (38.435203763s)
functional_test.go:659: soft start took 38.435821633s for "functional-949292" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-949292 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 cache add registry.k8s.io/pause:3.1: (1.175140157s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 cache add registry.k8s.io/pause:3.3: (1.181366265s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 cache add registry.k8s.io/pause:latest: (1.135388699s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-949292 /tmp/TestFunctionalserialCacheCmdcacheadd_local2524958278/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cache add minikube-local-cache-test:functional-949292
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 cache add minikube-local-cache-test:functional-949292: (1.772777603s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cache delete minikube-local-cache-test:functional-949292
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-949292
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (242.993839ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
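Note: the cache_reload sequence above is: remove the cached image from the node with crictl rmi, confirm crictl inspecti now fails, run minikube cache reload, then confirm inspecti succeeds again. A minimal, illustrative Go sketch of that flow, reusing the binary path and profile name from the log (not the repository's functional_test.go), could look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary used in the log and echoes its combined output.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		profile := "functional-949292" // profile name taken from the log
		// The image was removed with `crictl rmi`, so this inspecti is expected to fail.
		_ = run("-p", profile, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest")
		// Re-push everything previously recorded by `minikube cache add`.
		if err := run("-p", profile, "cache", "reload"); err != nil {
			panic(err)
		}
		// The image should now be present again in the node's CRI-O image store.
		if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
			panic(err)
		}
	}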

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 kubectl -- --context functional-949292 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-949292 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.61s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0116 22:50:47.136009   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:47.141869   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:47.152136   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:47.172390   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:47.212684   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:47.293014   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:47.453343   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:47.773891   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:48.414814   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:49.694973   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 22:50:52.255692   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-949292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.607914037s)
functional_test.go:757: restart took 37.608034162s for "functional-949292" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.61s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-949292 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 logs: (1.420652662s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 logs --file /tmp/TestFunctionalserialLogsFileCmd3894018126/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 logs --file /tmp/TestFunctionalserialLogsFileCmd3894018126/001/logs.txt: (1.419274489s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
TestFunctional/serial/InvalidService (4.91s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-949292 apply -f testdata/invalidsvc.yaml
E0116 22:50:57.376256   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-949292
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-949292: exit status 115 (305.516608ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.78:31066 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-949292 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-949292 delete -f testdata/invalidsvc.yaml: (1.403013396s)
--- PASS: TestFunctional/serial/InvalidService (4.91s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 config get cpus: exit status 14 (73.378566ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 config get cpus: exit status 14 (57.407077ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-949292 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-949292 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23092: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.09s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-949292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.665351ms)

                                                
                                                
-- stdout --
	* [functional-949292] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 22:51:16.867452   22544 out.go:296] Setting OutFile to fd 1 ...
	I0116 22:51:16.867632   22544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:51:16.867644   22544 out.go:309] Setting ErrFile to fd 2...
	I0116 22:51:16.867648   22544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:51:16.867849   22544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 22:51:16.868360   22544 out.go:303] Setting JSON to false
	I0116 22:51:16.869244   22544 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2023,"bootTime":1705443454,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 22:51:16.869304   22544 start.go:138] virtualization: kvm guest
	I0116 22:51:16.871760   22544 out.go:177] * [functional-949292] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 22:51:16.873240   22544 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 22:51:16.873289   22544 notify.go:220] Checking for updates...
	I0116 22:51:16.874644   22544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 22:51:16.876101   22544 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:51:16.877638   22544 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:51:16.879102   22544 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 22:51:16.880403   22544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 22:51:16.882015   22544 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 22:51:16.882424   22544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:51:16.882483   22544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:51:16.897144   22544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0116 22:51:16.897562   22544 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:51:16.898102   22544 main.go:141] libmachine: Using API Version  1
	I0116 22:51:16.898130   22544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:51:16.898495   22544 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:51:16.898750   22544 main.go:141] libmachine: (functional-949292) Calling .DriverName
	I0116 22:51:16.899020   22544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 22:51:16.899306   22544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:51:16.899337   22544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:51:16.914212   22544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0116 22:51:16.914592   22544 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:51:16.915013   22544 main.go:141] libmachine: Using API Version  1
	I0116 22:51:16.915041   22544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:51:16.915323   22544 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:51:16.915514   22544 main.go:141] libmachine: (functional-949292) Calling .DriverName
	I0116 22:51:16.947723   22544 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 22:51:16.949360   22544 start.go:298] selected driver: kvm2
	I0116 22:51:16.949381   22544 start.go:902] validating driver "kvm2" against &{Name:functional-949292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-949292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.78 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:51:16.949533   22544 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 22:51:16.951994   22544 out.go:177] 
	W0116 22:51:16.953354   22544 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 22:51:16.954902   22544 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949292 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-949292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-949292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.062441ms)

                                                
                                                
-- stdout --
	* [functional-949292] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 22:51:35.381831   23028 out.go:296] Setting OutFile to fd 1 ...
	I0116 22:51:35.381968   23028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:51:35.381977   23028 out.go:309] Setting ErrFile to fd 2...
	I0116 22:51:35.381982   23028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 22:51:35.382277   23028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 22:51:35.382829   23028 out.go:303] Setting JSON to false
	I0116 22:51:35.383739   23028 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2042,"bootTime":1705443454,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 22:51:35.383803   23028 start.go:138] virtualization: kvm guest
	I0116 22:51:35.386388   23028 out.go:177] * [functional-949292] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0116 22:51:35.388345   23028 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 22:51:35.388348   23028 notify.go:220] Checking for updates...
	I0116 22:51:35.390186   23028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 22:51:35.391834   23028 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 22:51:35.393424   23028 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 22:51:35.394984   23028 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 22:51:35.396826   23028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 22:51:35.398849   23028 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 22:51:35.399240   23028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:51:35.399291   23028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:51:35.413251   23028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0116 22:51:35.413740   23028 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:51:35.414312   23028 main.go:141] libmachine: Using API Version  1
	I0116 22:51:35.414355   23028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:51:35.414682   23028 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:51:35.414846   23028 main.go:141] libmachine: (functional-949292) Calling .DriverName
	I0116 22:51:35.415068   23028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 22:51:35.415341   23028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 22:51:35.415378   23028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 22:51:35.429250   23028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I0116 22:51:35.429705   23028 main.go:141] libmachine: () Calling .GetVersion
	I0116 22:51:35.430241   23028 main.go:141] libmachine: Using API Version  1
	I0116 22:51:35.430261   23028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 22:51:35.430628   23028 main.go:141] libmachine: () Calling .GetMachineName
	I0116 22:51:35.430830   23028 main.go:141] libmachine: (functional-949292) Calling .DriverName
	I0116 22:51:35.463031   23028 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0116 22:51:35.464593   23028 start.go:298] selected driver: kvm2
	I0116 22:51:35.464608   23028 start.go:902] validating driver "kvm2" against &{Name:functional-949292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-949292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.78 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 22:51:35.464771   23028 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 22:51:35.467815   23028 out.go:177] 
	W0116 22:51:35.469317   23028 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 22:51:35.470962   23028 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
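
The French output above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...") is the same insufficient-memory error as in the DryRun test: the requested 250MiB allocation is below the usable minimum of 1800MB. The command line carries no language flag, so the translation is presumably selected from the process locale. A hedged sketch, assuming minikube honours LC_ALL/LANG for message selection:

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-949292 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio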

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)
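
A minimal sketch of the three status invocations above. Note that the literal label "kublet" in the -f format string is just output text chosen by the test; the Go-template field it reads is .Kubelet.

    out/minikube-linux-amd64 -p functional-949292 status
    out/minikube-linux-amd64 -p functional-949292 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-949292 status -o json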

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (12.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-949292 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-949292 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-h7xm2" [c53b1e01-16bc-4b31-9b75-e132549457a2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-h7xm2" [c53b1e01-16bc-4b31-9b75-e132549457a2] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005343917s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.78:31825
functional_test.go:1674: http://192.168.39.78:31825: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-h7xm2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.78:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.78:31825
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.66s)
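
A hedged sketch of the sequence above, collapsed into a reproducible snippet; the NodePort (31825 in this run) is allocated dynamically, which is why the test resolves the URL with "service ... --url" rather than hard-coding it. The "kubectl wait" line is added here for illustration and is not part of the original test.

    kubectl --context functional-949292 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-949292 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-949292 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-949292 service hello-node-connect --url)
    curl -s "$URL"   # expected: the echoserver request dump shown above (Hostname, Request Information, ...)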

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (43.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [39ad1979-16c3-4d5e-a918-09aedb18b321] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007804749s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-949292 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-949292 apply -f testdata/storage-provisioner/pvc.yaml
E0116 22:51:07.616653   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-949292 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-949292 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-949292 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bad45be9-c5e0-477e-990d-b0d4a219e03e] Pending
helpers_test.go:344: "sp-pod" [bad45be9-c5e0-477e-990d-b0d4a219e03e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bad45be9-c5e0-477e-990d-b0d4a219e03e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.00371759s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-949292 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-949292 delete -f testdata/storage-provisioner/pod.yaml
E0116 22:51:28.097375   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-949292 delete -f testdata/storage-provisioner/pod.yaml: (2.392528702s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-949292 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cca6371e-d5dc-4b5e-b3cb-131921bfee40] Pending
helpers_test.go:344: "sp-pod" [cca6371e-d5dc-4b5e-b3cb-131921bfee40] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cca6371e-d5dc-4b5e-b3cb-131921bfee40] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005722655s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-949292 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.49s)
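
A minimal sketch of the persistence check this test performs: write through the PVC-backed mount, delete and recreate the pod, then confirm the file survives the pod's lifetime. The manifests live in the suite's testdata and are not reproduced here; only the names visible in the log (myclaim, sp-pod, /tmp/mount) are assumed.

    kubectl --context functional-949292 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-949292 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-949292 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-949292 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-949292 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-949292 exec sp-pod -- ls /tmp/mount   # expect: foo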

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh -n functional-949292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cp functional-949292:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3584711938/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh -n functional-949292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh -n functional-949292 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.50s)
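
A minimal sketch of the copy directions exercised above: host-to-VM, VM-to-host, and host-to-VM into a path that does not yet exist. The /tmp destination on the host below is illustrative; the test used a per-run temporary directory.

    out/minikube-linux-amd64 -p functional-949292 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-949292 cp functional-949292:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-949292 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /tmp/does/not/exist/cp-test.txt"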

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-949292 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-dr7w4" [9c4eb83a-74ce-4fab-98dd-3438416734ce] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-dr7w4" [9c4eb83a-74ce-4fab-98dd-3438416734ce] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.005009399s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-949292 exec mysql-859648c796-dr7w4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-949292 exec mysql-859648c796-dr7w4 -- mysql -ppassword -e "show databases;": exit status 1 (185.771059ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-949292 exec mysql-859648c796-dr7w4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.85s)
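
The first exec above fails with ERROR 2002 because the pod is Running but mysqld is not yet accepting socket connections; the test simply retries until the query succeeds. A hedged retry sketch (the pod name is specific to this run):

    for i in $(seq 1 10); do
      kubectl --context functional-949292 exec mysql-859648c796-dr7w4 -- \
        mysql -ppassword -e "show databases;" && break
      sleep 3
    done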

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/14930/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /etc/test/nested/copy/14930/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
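
A hedged note on where /etc/test/nested/copy/14930/hosts comes from: minikube syncs files placed under $MINIKUBE_HOME/files into the guest at the same relative path when the machine is provisioned, and this test appears to rely on that mechanism. A sketch under that assumption (14930 is just this run's test-process id):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/14930
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/14930/hosts
    # after the profile is next started/provisioned, the file is visible inside the guest:
    out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /etc/test/nested/copy/14930/hosts"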

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/14930.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /etc/ssl/certs/14930.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/14930.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /usr/share/ca-certificates/14930.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/149302.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /etc/ssl/certs/149302.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/149302.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /usr/share/ca-certificates/149302.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
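
The hashed filenames above (51391683.0, 3ec20f2e.0) look like OpenSSL subject-hash link names created for the synced PEMs. A hedged way to check that, assuming openssl is available inside the guest:

    out/minikube-linux-amd64 -p functional-949292 ssh "openssl x509 -in /etc/ssl/certs/14930.pem -noout -hash"   # would print 51391683 if the link name is the subject hash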

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-949292 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
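
Equivalent, arguably simpler ways to read the same node labels (standard kubectl output options, shown only as alternatives to the go-template above):

    kubectl --context functional-949292 get nodes --show-labels
    kubectl --context functional-949292 get nodes -o jsonpath='{.items[0].metadata.labels}'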

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh "sudo systemctl is-active docker": exit status 1 (213.659982ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh "sudo systemctl is-active containerd": exit status 1 (214.492257ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
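
The "Process exited with status 3" above is expected: systemctl is-active prints the unit state and exits non-zero for anything other than active, and minikube ssh propagates that exit code, so "inactive" on stdout plus a non-zero exit is the pass condition for the docker and containerd checks on a crio cluster. A short sketch (assuming the active runtime's unit is named crio):

    out/minikube-linux-amd64 -p functional-949292 ssh "sudo systemctl is-active crio"        # active, exit 0
    out/minikube-linux-amd64 -p functional-949292 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-949292 ssh "sudo systemctl is-active containerd"  # inactive, non-zero exit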

                                                
                                    
x
+
TestFunctional/parallel/License (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-949292 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-949292 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-b97rl" [deec4701-6466-41a9-92c2-efb0567e13d3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-b97rl" [deec4701-6466-41a9-92c2-efb0567e13d3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.006464149s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949292 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-949292
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-949292
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949292 image ls --format short --alsologtostderr:
I0116 22:51:45.139449   23680 out.go:296] Setting OutFile to fd 1 ...
I0116 22:51:45.139609   23680 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.139619   23680 out.go:309] Setting ErrFile to fd 2...
I0116 22:51:45.139627   23680 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.139975   23680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
I0116 22:51:45.140828   23680 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.140978   23680 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.141600   23680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.141651   23680 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.162634   23680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37513
I0116 22:51:45.163149   23680 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.163882   23680 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.163909   23680 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.164329   23680 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.164557   23680 main.go:141] libmachine: (functional-949292) Calling .GetState
I0116 22:51:45.166838   23680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.166890   23680 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.185378   23680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
I0116 22:51:45.185745   23680 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.186245   23680 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.186269   23680 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.186588   23680 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.186769   23680 main.go:141] libmachine: (functional-949292) Calling .DriverName
I0116 22:51:45.187015   23680 ssh_runner.go:195] Run: systemctl --version
I0116 22:51:45.187060   23680 main.go:141] libmachine: (functional-949292) Calling .GetSSHHostname
I0116 22:51:45.190443   23680 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.190906   23680 main.go:141] libmachine: (functional-949292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:35:f6", ip: ""} in network mk-functional-949292: {Iface:virbr1 ExpiryTime:2024-01-16 23:48:06 +0000 UTC Type:0 Mac:52:54:00:f8:35:f6 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:functional-949292 Clientid:01:52:54:00:f8:35:f6}
I0116 22:51:45.190929   23680 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined IP address 192.168.39.78 and MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.191115   23680 main.go:141] libmachine: (functional-949292) Calling .GetSSHPort
I0116 22:51:45.191298   23680 main.go:141] libmachine: (functional-949292) Calling .GetSSHKeyPath
I0116 22:51:45.191442   23680 main.go:141] libmachine: (functional-949292) Calling .GetSSHUsername
I0116 22:51:45.191588   23680 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/functional-949292/id_rsa Username:docker}
I0116 22:51:45.334250   23680 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 22:51:45.477577   23680 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.477598   23680 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.477908   23680 main.go:141] libmachine: (functional-949292) DBG | Closing plugin on server side
I0116 22:51:45.477914   23680 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.477932   23680 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 22:51:45.477949   23680 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.477966   23680 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.478188   23680 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.478206   23680 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)
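
The ImageList* subtests differ only in the --format flag; the stderr above shows each invocation ultimately shelling into the guest and running "sudo crictl images --output json", then rendering that JSON in the requested format. A minimal sketch:

    out/minikube-linux-amd64 -p functional-949292 image ls --format short
    out/minikube-linux-amd64 -p functional-949292 image ls --format table
    out/minikube-linux-amd64 -p functional-949292 image ls --format json
    out/minikube-linux-amd64 -p functional-949292 image ls --format yaml
    # the underlying call made inside the VM:
    out/minikube-linux-amd64 -p functional-949292 ssh "sudo crictl images --output json"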

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949292 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-949292  | cd347ae4ae49e | 3.35kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-949292  | ffd4cfbbe753e | 34.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949292 image ls --format table --alsologtostderr:
I0116 22:51:45.558567   23779 out.go:296] Setting OutFile to fd 1 ...
I0116 22:51:45.558739   23779 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.558757   23779 out.go:309] Setting ErrFile to fd 2...
I0116 22:51:45.558765   23779 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.559090   23779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
I0116 22:51:45.559973   23779 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.560135   23779 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.560799   23779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.560891   23779 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.577728   23779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
I0116 22:51:45.578219   23779 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.578961   23779 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.578984   23779 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.579392   23779 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.579604   23779 main.go:141] libmachine: (functional-949292) Calling .GetState
I0116 22:51:45.581739   23779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.581812   23779 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.597682   23779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39779
I0116 22:51:45.598103   23779 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.598619   23779 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.598639   23779 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.598950   23779 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.599131   23779 main.go:141] libmachine: (functional-949292) Calling .DriverName
I0116 22:51:45.599359   23779 ssh_runner.go:195] Run: systemctl --version
I0116 22:51:45.599397   23779 main.go:141] libmachine: (functional-949292) Calling .GetSSHHostname
I0116 22:51:45.602769   23779 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.603231   23779 main.go:141] libmachine: (functional-949292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:35:f6", ip: ""} in network mk-functional-949292: {Iface:virbr1 ExpiryTime:2024-01-16 23:48:06 +0000 UTC Type:0 Mac:52:54:00:f8:35:f6 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:functional-949292 Clientid:01:52:54:00:f8:35:f6}
I0116 22:51:45.603306   23779 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined IP address 192.168.39.78 and MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.603565   23779 main.go:141] libmachine: (functional-949292) Calling .GetSSHPort
I0116 22:51:45.603711   23779 main.go:141] libmachine: (functional-949292) Calling .GetSSHKeyPath
I0116 22:51:45.603854   23779 main.go:141] libmachine: (functional-949292) Calling .GetSSHUsername
I0116 22:51:45.603956   23779 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/functional-949292/id_rsa Username:docker}
I0116 22:51:45.740562   23779 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 22:51:45.859970   23779 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.859989   23779 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.860285   23779 main.go:141] libmachine: (functional-949292) DBG | Closing plugin on server side
I0116 22:51:45.860312   23779 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.860321   23779 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 22:51:45.860339   23779 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.860352   23779 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.860629   23779 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.860649   23779 main.go:141] libmachine: (functional-949292) DBG | Closing plugin on server side
I0116 22:51:45.860691   23779 main.go:141] libmachine: Making call to close connection to plugin binary
2024/01/16 22:51:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949292 image ls --format json --alsologtostderr:
[{"id":"cd347ae4ae49e3385abaae75eca81bc8384806d7a495c7bfe7b260d1525ed37e","repoDigests":["localhost/minikube-local-cache-test@sha256:13b3ace36c44bca0663e3839eef2b36a06451f2ff9f4ad7923588587088159fe"],"repoTags":["localhost/minikube-local-cache-test:functional-949292"],"size":"3345"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7fe0e6f
37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2
a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k
8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"82
e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2
e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-949292"],"size":"34114467"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949292 image ls --format json --alsologtostderr:
I0116 22:51:45.513160   23767 out.go:296] Setting OutFile to fd 1 ...
I0116 22:51:45.513392   23767 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.513401   23767 out.go:309] Setting ErrFile to fd 2...
I0116 22:51:45.513406   23767 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.513639   23767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
I0116 22:51:45.514314   23767 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.514535   23767 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.515080   23767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.515138   23767 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.531640   23767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
I0116 22:51:45.532198   23767 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.532761   23767 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.532780   23767 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.533219   23767 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.534286   23767 main.go:141] libmachine: (functional-949292) Calling .GetState
I0116 22:51:45.536608   23767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.536659   23767 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.555076   23767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
I0116 22:51:45.555482   23767 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.555997   23767 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.556021   23767 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.556365   23767 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.556560   23767 main.go:141] libmachine: (functional-949292) Calling .DriverName
I0116 22:51:45.556752   23767 ssh_runner.go:195] Run: systemctl --version
I0116 22:51:45.556779   23767 main.go:141] libmachine: (functional-949292) Calling .GetSSHHostname
I0116 22:51:45.560285   23767 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.560737   23767 main.go:141] libmachine: (functional-949292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:35:f6", ip: ""} in network mk-functional-949292: {Iface:virbr1 ExpiryTime:2024-01-16 23:48:06 +0000 UTC Type:0 Mac:52:54:00:f8:35:f6 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:functional-949292 Clientid:01:52:54:00:f8:35:f6}
I0116 22:51:45.560845   23767 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined IP address 192.168.39.78 and MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.560894   23767 main.go:141] libmachine: (functional-949292) Calling .GetSSHPort
I0116 22:51:45.561082   23767 main.go:141] libmachine: (functional-949292) Calling .GetSSHKeyPath
I0116 22:51:45.561211   23767 main.go:141] libmachine: (functional-949292) Calling .GetSSHUsername
I0116 22:51:45.561372   23767 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/functional-949292/id_rsa Username:docker}
I0116 22:51:45.687817   23767 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 22:51:45.789681   23767 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.789701   23767 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.789953   23767 main.go:141] libmachine: (functional-949292) DBG | Closing plugin on server side
I0116 22:51:45.789972   23767 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.789986   23767 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 22:51:45.790004   23767 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.790017   23767 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.790259   23767 main.go:141] libmachine: (functional-949292) DBG | Closing plugin on server side
I0116 22:51:45.790316   23767 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.790330   23767 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949292 image ls --format yaml --alsologtostderr:
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-949292
size: "34114467"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: cd347ae4ae49e3385abaae75eca81bc8384806d7a495c7bfe7b260d1525ed37e
repoDigests:
- localhost/minikube-local-cache-test@sha256:13b3ace36c44bca0663e3839eef2b36a06451f2ff9f4ad7923588587088159fe
repoTags:
- localhost/minikube-local-cache-test:functional-949292
size: "3345"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949292 image ls --format yaml --alsologtostderr:
I0116 22:51:45.154498   23687 out.go:296] Setting OutFile to fd 1 ...
I0116 22:51:45.154729   23687 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.154759   23687 out.go:309] Setting ErrFile to fd 2...
I0116 22:51:45.154775   23687 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.155099   23687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
I0116 22:51:45.156016   23687 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.156229   23687 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.156816   23687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.156904   23687 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.175634   23687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
I0116 22:51:45.176063   23687 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.176771   23687 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.176802   23687 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.177197   23687 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.177557   23687 main.go:141] libmachine: (functional-949292) Calling .GetState
I0116 22:51:45.179587   23687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.179621   23687 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.199018   23687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
I0116 22:51:45.199438   23687 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.200060   23687 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.200084   23687 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.200484   23687 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.200681   23687 main.go:141] libmachine: (functional-949292) Calling .DriverName
I0116 22:51:45.200902   23687 ssh_runner.go:195] Run: systemctl --version
I0116 22:51:45.200933   23687 main.go:141] libmachine: (functional-949292) Calling .GetSSHHostname
I0116 22:51:45.204077   23687 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.204537   23687 main.go:141] libmachine: (functional-949292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:35:f6", ip: ""} in network mk-functional-949292: {Iface:virbr1 ExpiryTime:2024-01-16 23:48:06 +0000 UTC Type:0 Mac:52:54:00:f8:35:f6 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:functional-949292 Clientid:01:52:54:00:f8:35:f6}
I0116 22:51:45.204567   23687 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined IP address 192.168.39.78 and MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.204719   23687 main.go:141] libmachine: (functional-949292) Calling .GetSSHPort
I0116 22:51:45.204864   23687 main.go:141] libmachine: (functional-949292) Calling .GetSSHKeyPath
I0116 22:51:45.204998   23687 main.go:141] libmachine: (functional-949292) Calling .GetSSHUsername
I0116 22:51:45.205128   23687 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/functional-949292/id_rsa Username:docker}
I0116 22:51:45.316811   23687 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 22:51:45.433567   23687 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.433580   23687 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.433895   23687 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.433911   23687 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 22:51:45.433928   23687 main.go:141] libmachine: Making call to close driver server
I0116 22:51:45.433942   23687 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:45.434201   23687 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:45.434221   23687 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 22:51:45.434254   23687 main.go:141] libmachine: (functional-949292) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh pgrep buildkitd: exit status 1 (319.328605ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image build -t localhost/my-image:functional-949292 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image build -t localhost/my-image:functional-949292 testdata/build --alsologtostderr: (4.892580178s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-949292 image build -t localhost/my-image:functional-949292 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 33bc7234078
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-949292
--> 7f82feb59e6
Successfully tagged localhost/my-image:functional-949292
7f82feb59e6d794f9f3ebc4e4907a044ca88f790ef166f32db711fbd99a945b6
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-949292 image build -t localhost/my-image:functional-949292 testdata/build --alsologtostderr:
I0116 22:51:45.567055   23785 out.go:296] Setting OutFile to fd 1 ...
I0116 22:51:45.567288   23785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.567301   23785 out.go:309] Setting ErrFile to fd 2...
I0116 22:51:45.567308   23785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 22:51:45.567609   23785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
I0116 22:51:45.568417   23785 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.569062   23785 config.go:182] Loaded profile config "functional-949292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 22:51:45.569639   23785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.569721   23785 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.585032   23785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
I0116 22:51:45.585453   23785 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.586114   23785 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.586132   23785 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.586541   23785 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.586760   23785 main.go:141] libmachine: (functional-949292) Calling .GetState
I0116 22:51:45.588917   23785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 22:51:45.589001   23785 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 22:51:45.605504   23785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38057
I0116 22:51:45.605821   23785 main.go:141] libmachine: () Calling .GetVersion
I0116 22:51:45.606381   23785 main.go:141] libmachine: Using API Version  1
I0116 22:51:45.606401   23785 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 22:51:45.606699   23785 main.go:141] libmachine: () Calling .GetMachineName
I0116 22:51:45.607028   23785 main.go:141] libmachine: (functional-949292) Calling .DriverName
I0116 22:51:45.607216   23785 ssh_runner.go:195] Run: systemctl --version
I0116 22:51:45.607237   23785 main.go:141] libmachine: (functional-949292) Calling .GetSSHHostname
I0116 22:51:45.609913   23785 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.610291   23785 main.go:141] libmachine: (functional-949292) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:35:f6", ip: ""} in network mk-functional-949292: {Iface:virbr1 ExpiryTime:2024-01-16 23:48:06 +0000 UTC Type:0 Mac:52:54:00:f8:35:f6 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:functional-949292 Clientid:01:52:54:00:f8:35:f6}
I0116 22:51:45.610353   23785 main.go:141] libmachine: (functional-949292) DBG | domain functional-949292 has defined IP address 192.168.39.78 and MAC address 52:54:00:f8:35:f6 in network mk-functional-949292
I0116 22:51:45.610514   23785 main.go:141] libmachine: (functional-949292) Calling .GetSSHPort
I0116 22:51:45.610694   23785 main.go:141] libmachine: (functional-949292) Calling .GetSSHKeyPath
I0116 22:51:45.610831   23785 main.go:141] libmachine: (functional-949292) Calling .GetSSHUsername
I0116 22:51:45.611008   23785 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/functional-949292/id_rsa Username:docker}
I0116 22:51:45.738950   23785 build_images.go:151] Building image from path: /tmp/build.3834336080.tar
I0116 22:51:45.739033   23785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 22:51:45.776614   23785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3834336080.tar
I0116 22:51:45.807910   23785 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3834336080.tar: stat -c "%s %y" /var/lib/minikube/build/build.3834336080.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3834336080.tar': No such file or directory
I0116 22:51:45.807956   23785 ssh_runner.go:362] scp /tmp/build.3834336080.tar --> /var/lib/minikube/build/build.3834336080.tar (3072 bytes)
I0116 22:51:45.889830   23785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3834336080
I0116 22:51:45.907148   23785 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3834336080 -xf /var/lib/minikube/build/build.3834336080.tar
I0116 22:51:45.929466   23785 crio.go:297] Building image: /var/lib/minikube/build/build.3834336080
I0116 22:51:45.929524   23785 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-949292 /var/lib/minikube/build/build.3834336080 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0116 22:51:50.331683   23785 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-949292 /var/lib/minikube/build/build.3834336080 --cgroup-manager=cgroupfs: (4.402120953s)
I0116 22:51:50.331752   23785 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3834336080
I0116 22:51:50.355903   23785 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3834336080.tar
I0116 22:51:50.379622   23785 build_images.go:207] Built localhost/my-image:functional-949292 from /tmp/build.3834336080.tar
I0116 22:51:50.379667   23785 build_images.go:123] succeeded building to: functional-949292
I0116 22:51:50.379673   23785 build_images.go:124] failed building to: 
I0116 22:51:50.379726   23785 main.go:141] libmachine: Making call to close driver server
I0116 22:51:50.379746   23785 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:50.380034   23785 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:50.380055   23785 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 22:51:50.380066   23785 main.go:141] libmachine: Making call to close driver server
I0116 22:51:50.380076   23785 main.go:141] libmachine: (functional-949292) Calling .Close
I0116 22:51:50.380308   23785 main.go:141] libmachine: Successfully made call to close driver server
I0116 22:51:50.380320   23785 main.go:141] libmachine: (functional-949292) DBG | Closing plugin on server side
I0116 22:51:50.380326   23785 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.954801667s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-949292
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image load --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image load --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr: (3.66412982s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image load --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image load --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr: (2.283273663s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.876162433s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-949292
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image load --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image load --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr: (7.876037997s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 service list -o json
functional_test.go:1493: Took "264.823623ms" to run "out/minikube-linux-amd64 -p functional-949292 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.78:32514
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.78:32514
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "323.37775ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "70.083203ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "409.962939ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "62.711538ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (24.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdany-port3814613723/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705445477100449036" to /tmp/TestFunctionalparallelMountCmdany-port3814613723/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705445477100449036" to /tmp/TestFunctionalparallelMountCmdany-port3814613723/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705445477100449036" to /tmp/TestFunctionalparallelMountCmdany-port3814613723/001/test-1705445477100449036
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.046062ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 22:51 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 22:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 22:51 test-1705445477100449036
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh cat /mount-9p/test-1705445477100449036
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-949292 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [730d08d3-6311-4f6b-93ae-f82ace140999] Pending
helpers_test.go:344: "busybox-mount" [730d08d3-6311-4f6b-93ae-f82ace140999] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [730d08d3-6311-4f6b-93ae-f82ace140999] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [730d08d3-6311-4f6b-93ae-f82ace140999] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.004727025s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-949292 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdany-port3814613723/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image save gcr.io/google-containers/addon-resizer:functional-949292 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image save gcr.io/google-containers/addon-resizer:functional-949292 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.321027649s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image rm gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image rm gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr: (1.056802581s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.137987334s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-949292
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 image save --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-949292 image save --daemon gcr.io/google-containers/addon-resizer:functional-949292 --alsologtostderr: (6.37769525s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-949292
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdspecific-port4148809500/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.090953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdspecific-port4148809500/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh "sudo umount -f /mount-9p": exit status 1 (266.954434ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-949292 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdspecific-port4148809500/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T" /mount1: exit status 1 (290.823579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-949292 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-949292 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-949292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup62289122/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-949292
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-949292
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-949292
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (82.49s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-264702 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0116 22:52:09.057817   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-264702 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.489043941s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (82.49s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.4s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-264702 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-264702 addons enable ingress --alsologtostderr -v=5: (16.397189165s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.40s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-264702 addons enable ingress-dns --alsologtostderr -v=5
E0116 22:53:30.978377   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                    
TestJSONOutput/start/Command (95.47s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-565554 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0116 22:56:41.930152   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:57:22.891291   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-565554 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.466108352s)
--- PASS: TestJSONOutput/start/Command (95.47s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-565554 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-565554 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.12s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-565554 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-565554 --output=json --user=testUser: (7.11838572s)
--- PASS: TestJSONOutput/stop/Command (7.12s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-773888 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-773888 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.100248ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7d0ecfa8-c636-4537-a6c6-ab3244b64bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-773888] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76684980-681c-46ef-a3f6-7383520d6a7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17975"}}
	{"specversion":"1.0","id":"be1a4d82-5440-4a48-8f6c-a2e64728ee5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"58ed2c52-3649-46a5-a613-8c0833c2a798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig"}}
	{"specversion":"1.0","id":"4966f8ca-13e4-4d39-b962-9627dd262a62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube"}}
	{"specversion":"1.0","id":"07f32caf-0c5c-4e0f-a846-114b5c546f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0b5830dd-8499-4075-8296-16b3228bf835","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"224decf3-5225-4032-a78e-20728017656a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-773888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-773888
--- PASS: TestErrorJSONOutput (0.22s)
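
Each line that minikube emits with --output=json (as in the stdout block above) is a CloudEvents-style JSON object with specversion, id, source, type, datacontenttype and a data map. A minimal decoding sketch in Go, using only the field names visible in this run's output; the event types handled below (io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.error) are the ones printed above, and the program itself is an illustration, not part of the test suite:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the --output=json lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Read events line by line, e.g.:
		//   out/minikube-linux-amd64 start -p demo --output=json | go run main.go
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			default:
				fmt.Println(ev.Data["message"])
			}
		}
	}
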

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (93.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-894033 --driver=kvm2  --container-runtime=crio
E0116 22:58:31.442688   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:31.447984   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:31.458245   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:31.478540   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:31.518806   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:31.599157   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:31.759600   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:32.080199   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:32.720826   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:34.001591   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:36.562723   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:41.682995   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 22:58:44.811569   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 22:58:51.923998   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-894033 --driver=kvm2  --container-runtime=crio: (46.011968753s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-896639 --driver=kvm2  --container-runtime=crio
E0116 22:59:12.405043   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-896639 --driver=kvm2  --container-runtime=crio: (44.859457177s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-894033
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-896639
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-896639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-896639
helpers_test.go:175: Cleaning up "first-894033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-894033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-894033: (1.000547721s)
--- PASS: TestMinikubeProfile (93.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.47s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-435005 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0116 22:59:53.365678   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-435005 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.471020781s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.47s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-435005 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-435005 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
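
The verification above simply shells into the guest and greps the mount table for a 9p entry. A minimal Go sketch of the same check driven through os/exec; the binary path and profile name are copied from this run, and this is an illustration rather than the test's actual helper:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs: list mounts inside the guest over SSH.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "mount-start-1-435005", "ssh", "--", "mount").CombinedOutput()
		if err != nil {
			log.Fatalf("ssh mount: %v\n%s", err, out)
		}
		if strings.Contains(string(out), "9p") {
			fmt.Println("9p mount present")
		} else {
			fmt.Println("9p mount missing")
		}
	}
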

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.91s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-449854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-449854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.905677402s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.91s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-449854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-449854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-435005 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-449854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-449854 ssh -- mount | grep 9p
E0116 23:00:47.136751   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.09s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-449854
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-449854: (1.09099836s)
--- PASS: TestMountStart/serial/Stop (1.09s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-449854
E0116 23:01:00.968017   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-449854: (21.856281422s)
--- PASS: TestMountStart/serial/RestartStopped (22.86s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-449854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-449854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (160.15s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328490 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0116 23:01:15.286601   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:01:28.652314   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:03:31.442274   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328490 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m39.720117161s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (160.15s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-328490 -- rollout status deployment/busybox: (3.860451467s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-b7wdd -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-dcshd -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-b7wdd -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-dcshd -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-b7wdd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-dcshd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.68s)
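
The DNS checks above run nslookup inside each busybox pod through the minikube kubectl wrapper. A minimal sketch of one such check in Go; the binary path, profile, and pod name are copied from this run, the deployment is assumed to be rolled out already, and the program is illustrative only:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Mirrors: out/minikube-linux-amd64 kubectl -p multinode-328490 -- \
		//          exec busybox-5b5d89c9d6-b7wdd -- nslookup kubernetes.io
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, n := range names {
			out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-328490",
				"--", "exec", "busybox-5b5d89c9d6-b7wdd", "--", "nslookup", n).CombinedOutput()
			if err != nil {
				log.Fatalf("nslookup %s failed: %v\n%s", n, err, out)
			}
			fmt.Printf("resolved %s\n", n)
		}
	}
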

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-b7wdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-b7wdd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-dcshd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0116 23:03:59.127253   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328490 -- exec busybox-5b5d89c9d6-dcshd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (43.55s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-328490 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-328490 -v 3 --alsologtostderr: (42.983547195s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.55s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-328490 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.58s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp testdata/cp-test.txt multinode-328490:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile870702025/001/cp-test_multinode-328490.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490:/home/docker/cp-test.txt multinode-328490-m02:/home/docker/cp-test_multinode-328490_multinode-328490-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m02 "sudo cat /home/docker/cp-test_multinode-328490_multinode-328490-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490:/home/docker/cp-test.txt multinode-328490-m03:/home/docker/cp-test_multinode-328490_multinode-328490-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m03 "sudo cat /home/docker/cp-test_multinode-328490_multinode-328490-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp testdata/cp-test.txt multinode-328490-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile870702025/001/cp-test_multinode-328490-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490-m02:/home/docker/cp-test.txt multinode-328490:/home/docker/cp-test_multinode-328490-m02_multinode-328490.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490 "sudo cat /home/docker/cp-test_multinode-328490-m02_multinode-328490.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490-m02:/home/docker/cp-test.txt multinode-328490-m03:/home/docker/cp-test_multinode-328490-m02_multinode-328490-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m03 "sudo cat /home/docker/cp-test_multinode-328490-m02_multinode-328490-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp testdata/cp-test.txt multinode-328490-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile870702025/001/cp-test_multinode-328490-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490-m03:/home/docker/cp-test.txt multinode-328490:/home/docker/cp-test_multinode-328490-m03_multinode-328490.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490 "sudo cat /home/docker/cp-test_multinode-328490-m03_multinode-328490.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 cp multinode-328490-m03:/home/docker/cp-test.txt multinode-328490-m02:/home/docker/cp-test_multinode-328490-m03_multinode-328490-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 ssh -n multinode-328490-m02 "sudo cat /home/docker/cp-test_multinode-328490-m03_multinode-328490-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.58s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-328490 node stop m03: (1.39904051s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328490 status: exit status 7 (435.882473ms)

                                                
                                                
-- stdout --
	multinode-328490
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328490-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328490-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328490 status --alsologtostderr: exit status 7 (429.102107ms)

                                                
                                                
-- stdout --
	multinode-328490
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328490-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328490-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 23:04:52.646850   30796 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:04:52.647117   30796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:04:52.647127   30796 out.go:309] Setting ErrFile to fd 2...
	I0116 23:04:52.647132   30796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:04:52.647326   30796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:04:52.647474   30796 out.go:303] Setting JSON to false
	I0116 23:04:52.647503   30796 mustload.go:65] Loading cluster: multinode-328490
	I0116 23:04:52.647614   30796 notify.go:220] Checking for updates...
	I0116 23:04:52.647850   30796 config.go:182] Loaded profile config "multinode-328490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:04:52.647862   30796 status.go:255] checking status of multinode-328490 ...
	I0116 23:04:52.648241   30796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:04:52.648295   30796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:04:52.666025   30796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I0116 23:04:52.666433   30796 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:04:52.666992   30796 main.go:141] libmachine: Using API Version  1
	I0116 23:04:52.667018   30796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:04:52.667419   30796 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:04:52.667636   30796 main.go:141] libmachine: (multinode-328490) Calling .GetState
	I0116 23:04:52.669130   30796 status.go:330] multinode-328490 host status = "Running" (err=<nil>)
	I0116 23:04:52.669148   30796 host.go:66] Checking if "multinode-328490" exists ...
	I0116 23:04:52.669430   30796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:04:52.669464   30796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:04:52.684307   30796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I0116 23:04:52.684663   30796 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:04:52.685057   30796 main.go:141] libmachine: Using API Version  1
	I0116 23:04:52.685076   30796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:04:52.685353   30796 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:04:52.685518   30796 main.go:141] libmachine: (multinode-328490) Calling .GetIP
	I0116 23:04:52.688190   30796 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:04:52.688562   30796 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:01:27 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:04:52.688614   30796 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:04:52.688674   30796 host.go:66] Checking if "multinode-328490" exists ...
	I0116 23:04:52.689039   30796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:04:52.689104   30796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:04:52.702994   30796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I0116 23:04:52.703352   30796 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:04:52.703729   30796 main.go:141] libmachine: Using API Version  1
	I0116 23:04:52.703746   30796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:04:52.704037   30796 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:04:52.704176   30796 main.go:141] libmachine: (multinode-328490) Calling .DriverName
	I0116 23:04:52.704331   30796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 23:04:52.704355   30796 main.go:141] libmachine: (multinode-328490) Calling .GetSSHHostname
	I0116 23:04:52.706719   30796 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:04:52.707073   30796 main.go:141] libmachine: (multinode-328490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:25:4f", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:01:27 +0000 UTC Type:0 Mac:52:54:00:b2:25:4f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-328490 Clientid:01:52:54:00:b2:25:4f}
	I0116 23:04:52.707116   30796 main.go:141] libmachine: (multinode-328490) DBG | domain multinode-328490 has defined IP address 192.168.39.50 and MAC address 52:54:00:b2:25:4f in network mk-multinode-328490
	I0116 23:04:52.707191   30796 main.go:141] libmachine: (multinode-328490) Calling .GetSSHPort
	I0116 23:04:52.707346   30796 main.go:141] libmachine: (multinode-328490) Calling .GetSSHKeyPath
	I0116 23:04:52.707487   30796 main.go:141] libmachine: (multinode-328490) Calling .GetSSHUsername
	I0116 23:04:52.707639   30796 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490/id_rsa Username:docker}
	I0116 23:04:52.803143   30796 ssh_runner.go:195] Run: systemctl --version
	I0116 23:04:52.808485   30796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:04:52.822211   30796 kubeconfig.go:92] found "multinode-328490" server: "https://192.168.39.50:8443"
	I0116 23:04:52.822240   30796 api_server.go:166] Checking apiserver status ...
	I0116 23:04:52.822270   30796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 23:04:52.833157   30796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	I0116 23:04:52.841957   30796 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod8fca46a478051a968c54a441a292fd23/crio-5c3b9ee54ffced884d76d1c337b59f8b7a429f870a52dbbea27a4633d338c172"
	I0116 23:04:52.842008   30796 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8fca46a478051a968c54a441a292fd23/crio-5c3b9ee54ffced884d76d1c337b59f8b7a429f870a52dbbea27a4633d338c172/freezer.state
	I0116 23:04:52.850499   30796 api_server.go:204] freezer state: "THAWED"
	I0116 23:04:52.850523   30796 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0116 23:04:52.855347   30796 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0116 23:04:52.855366   30796 status.go:421] multinode-328490 apiserver status = Running (err=<nil>)
	I0116 23:04:52.855374   30796 status.go:257] multinode-328490 status: &{Name:multinode-328490 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 23:04:52.855387   30796 status.go:255] checking status of multinode-328490-m02 ...
	I0116 23:04:52.855661   30796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:04:52.855692   30796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:04:52.870224   30796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0116 23:04:52.870614   30796 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:04:52.871038   30796 main.go:141] libmachine: Using API Version  1
	I0116 23:04:52.871057   30796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:04:52.871345   30796 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:04:52.871570   30796 main.go:141] libmachine: (multinode-328490-m02) Calling .GetState
	I0116 23:04:52.872991   30796 status.go:330] multinode-328490-m02 host status = "Running" (err=<nil>)
	I0116 23:04:52.873008   30796 host.go:66] Checking if "multinode-328490-m02" exists ...
	I0116 23:04:52.873372   30796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:04:52.873418   30796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:04:52.887025   30796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0116 23:04:52.887373   30796 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:04:52.887891   30796 main.go:141] libmachine: Using API Version  1
	I0116 23:04:52.887911   30796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:04:52.888191   30796 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:04:52.888381   30796 main.go:141] libmachine: (multinode-328490-m02) Calling .GetIP
	I0116 23:04:52.891232   30796 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:04:52.891670   30796 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:04:52.891697   30796 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:04:52.891826   30796 host.go:66] Checking if "multinode-328490-m02" exists ...
	I0116 23:04:52.892092   30796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:04:52.892129   30796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:04:52.906293   30796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0116 23:04:52.906653   30796 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:04:52.907077   30796 main.go:141] libmachine: Using API Version  1
	I0116 23:04:52.907098   30796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:04:52.907351   30796 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:04:52.907539   30796 main.go:141] libmachine: (multinode-328490-m02) Calling .DriverName
	I0116 23:04:52.907746   30796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 23:04:52.907766   30796 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHHostname
	I0116 23:04:52.910315   30796 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:04:52.910753   30796 main.go:141] libmachine: (multinode-328490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:27:93", ip: ""} in network mk-multinode-328490: {Iface:virbr1 ExpiryTime:2024-01-17 00:02:32 +0000 UTC Type:0 Mac:52:54:00:2e:27:93 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-328490-m02 Clientid:01:52:54:00:2e:27:93}
	I0116 23:04:52.910789   30796 main.go:141] libmachine: (multinode-328490-m02) DBG | domain multinode-328490-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:2e:27:93 in network mk-multinode-328490
	I0116 23:04:52.910938   30796 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHPort
	I0116 23:04:52.911109   30796 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHKeyPath
	I0116 23:04:52.911275   30796 main.go:141] libmachine: (multinode-328490-m02) Calling .GetSSHUsername
	I0116 23:04:52.911409   30796 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17975-6238/.minikube/machines/multinode-328490-m02/id_rsa Username:docker}
	I0116 23:04:52.992942   30796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 23:04:53.004350   30796 status.go:257] multinode-328490-m02 status: &{Name:multinode-328490-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 23:04:53.004390   30796 status.go:255] checking status of multinode-328490-m03 ...
	I0116 23:04:53.004801   30796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 23:04:53.004848   30796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 23:04:53.019263   30796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0116 23:04:53.019650   30796 main.go:141] libmachine: () Calling .GetVersion
	I0116 23:04:53.020082   30796 main.go:141] libmachine: Using API Version  1
	I0116 23:04:53.020106   30796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 23:04:53.020411   30796 main.go:141] libmachine: () Calling .GetMachineName
	I0116 23:04:53.020600   30796 main.go:141] libmachine: (multinode-328490-m03) Calling .GetState
	I0116 23:04:53.022063   30796 status.go:330] multinode-328490-m03 host status = "Stopped" (err=<nil>)
	I0116 23:04:53.022084   30796 status.go:343] host is not running, skipping remaining checks
	I0116 23:04:53.022088   30796 status.go:257] multinode-328490-m03 status: &{Name:multinode-328490-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
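
The status trace above ends the control-plane check by locating the kube-apiserver process, confirming its freezer cgroup is THAWED, and then GETting /healthz on the control-plane endpoint, expecting a 200 with body "ok". A minimal sketch of that last step in Go; the endpoint is copied from this run, and TLS verification is skipped here purely for illustration, which is not necessarily what the real status code does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: accept the cluster's self-signed certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.50:8443/healthz")
		if err != nil {
			log.Fatalf("healthz check failed: %v", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("https://192.168.39.50:8443/healthz returned %d: %s\n", resp.StatusCode, body)
	}
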

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.52s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-328490 node start m03 --alsologtostderr: (28.878420021s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.52s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-328490 node delete m03: (1.016408568s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.55s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (440.19s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328490 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0116 23:20:47.136829   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:21:00.967757   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:23:31.442208   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:23:50.183235   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:25:47.136744   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:26:00.968214   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328490 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m19.646683003s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328490 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (440.19s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328490
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328490-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-328490-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.400453ms)

                                                
                                                
-- stdout --
	* [multinode-328490-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-328490-m02' is duplicated with machine name 'multinode-328490-m02' in profile 'multinode-328490'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328490-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328490-m03 --driver=kvm2  --container-runtime=crio: (45.845671542s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-328490
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-328490: exit status 80 (227.905512ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-328490
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-328490-m03 already exists in multinode-328490-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-328490-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.19s)

                                                
                                    
TestScheduledStopUnix (116.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-681780 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-681780 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.369481816s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681780 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-681780 -n scheduled-stop-681780
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681780 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681780 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-681780 -n scheduled-stop-681780
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-681780
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681780 --schedule 15s
E0116 23:33:31.443082   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-681780
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-681780: exit status 7 (76.259885ms)

                                                
                                                
-- stdout --
	scheduled-stop-681780
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-681780 -n scheduled-stop-681780
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-681780 -n scheduled-stop-681780: exit status 7 (81.437616ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-681780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-681780
--- PASS: TestScheduledStopUnix (116.15s)
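
After a stop is scheduled, the test repeatedly queries the host state with the --format={{.Host}} template shown above until it reports Stopped (exit status 7 is expected once the VM is down). A minimal polling sketch in Go; the binary path and profile name are copied from this run, while the interval and retry count are arbitrary illustration values:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 30; i++ {
			// Same query the test issues; a non-zero exit is expected once the host is stopped.
			out, _ := exec.Command("out/minikube-linux-amd64", "status",
				"--format={{.Host}}", "-p", "scheduled-stop-681780",
				"-n", "scheduled-stop-681780").CombinedOutput()
			state := strings.TrimSpace(string(out))
			fmt.Printf("host state: %s\n", state)
			if state == "Stopped" {
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("timed out waiting for Stopped")
	}
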

                                                
                                    
TestRunningBinaryUpgrade (222.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.696335758 start -p running-upgrade-769920 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.696335758 start -p running-upgrade-769920 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m7.780989607s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-769920 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-769920 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.334298173s)
helpers_test.go:175: Cleaning up "running-upgrade-769920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-769920
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-769920: (1.203565508s)
--- PASS: TestRunningBinaryUpgrade (222.56s)

                                                
                                    
TestKubernetesUpgrade (178.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.791515474s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-264001
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-264001: (2.509687032s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-264001 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-264001 status --format={{.Host}}: exit status 7 (83.977282ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.195537324s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-264001 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (169.083491ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-264001] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-264001
	    minikube start -p kubernetes-upgrade-264001 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2640012 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-264001 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0116 23:40:30.184217   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-264001 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.025643747s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-264001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-264001
--- PASS: TestKubernetesUpgrade (178.70s)
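
For reference, the K8S_DOWNGRADE_UNSUPPORTED failure above is the expected outcome: minikube refuses to move an existing cluster from v1.29.0-rc.2 back to v1.16.0, and the test only verifies that refusal before restarting at the newer version. A minimal sketch of the recovery path the error message itself suggests (profile name taken from this run; deleting the profile discards the existing cluster state):

	# recreate the profile at the older Kubernetes version, per the suggestion printed above
	minikube delete -p kubernetes-upgrade-264001
	minikube start -p kubernetes-upgrade-264001 --kubernetes-version=v1.16.0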

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-727469 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-727469 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (93.137262ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-727469] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
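
The MK_USAGE exit above is likewise the asserted behaviour: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start command bails out in well under a second. A short sketch of the two valid alternatives, both taken from output elsewhere in this report (clear a globally pinned version, or start the profile without Kubernetes at all):

	# drop a globally pinned Kubernetes version from the minikube config
	minikube config unset kubernetes-version
	# or start the profile with no Kubernetes components, as the following subtests do
	out/minikube-linux-amd64 start -p NoKubernetes-727469 --no-kubernetes --driver=kvm2 --container-runtime=crio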

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (74.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-727469 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-727469 --driver=kvm2  --container-runtime=crio: (1m14.259265476s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-727469 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.51s)

                                                
                                    
TestNetworkPlugins/group/false (3.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-097488 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-097488 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (114.735638ms)

                                                
                                                
-- stdout --
	* [false-097488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 23:34:19.055625   39094 out.go:296] Setting OutFile to fd 1 ...
	I0116 23:34:19.055866   39094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:34:19.055875   39094 out.go:309] Setting ErrFile to fd 2...
	I0116 23:34:19.055880   39094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 23:34:19.056072   39094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17975-6238/.minikube/bin
	I0116 23:34:19.056622   39094 out.go:303] Setting JSON to false
	I0116 23:34:19.057540   39094 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4605,"bootTime":1705443454,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 23:34:19.057606   39094 start.go:138] virtualization: kvm guest
	I0116 23:34:19.059752   39094 out.go:177] * [false-097488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 23:34:19.061453   39094 out.go:177]   - MINIKUBE_LOCATION=17975
	I0116 23:34:19.061451   39094 notify.go:220] Checking for updates...
	I0116 23:34:19.062823   39094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 23:34:19.064118   39094 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17975-6238/kubeconfig
	I0116 23:34:19.065403   39094 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17975-6238/.minikube
	I0116 23:34:19.066726   39094 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 23:34:19.068170   39094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 23:34:19.070128   39094 config.go:182] Loaded profile config "NoKubernetes-727469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:34:19.070261   39094 config.go:182] Loaded profile config "offline-crio-689316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 23:34:19.070404   39094 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 23:34:19.105454   39094 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 23:34:19.106954   39094 start.go:298] selected driver: kvm2
	I0116 23:34:19.106966   39094 start.go:902] validating driver "kvm2" against <nil>
	I0116 23:34:19.106977   39094 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 23:34:19.109038   39094 out.go:177] 
	W0116 23:34:19.110295   39094 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0116 23:34:19.111354   39094 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-097488 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-097488" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-097488

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-097488"

                                                
                                                
----------------------- debugLogs end: false-097488 [took: 3.239860311s] --------------------------------
helpers_test.go:175: Cleaning up "false-097488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-097488
--- PASS: TestNetworkPlugins/group/false (3.51s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (66.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-727469 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0116 23:35:47.136931   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-727469 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.651967142s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-727469 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-727469 status -o json: exit status 2 (266.315862ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-727469","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-727469
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-727469: (1.142104373s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.06s)

                                                
                                    
TestNoKubernetes/serial/Start (46.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-727469 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-727469 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.903978376s)
--- PASS: TestNoKubernetes/serial/Start (46.90s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-727469 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-727469 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.302421ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
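
The non-zero exit here is what makes the check pass: systemctl is-active exits 0 only when the unit is active and typically exits 3 for an inactive unit, which ssh then surfaces as the "Process exited with status 3" seen above. A hand-run equivalent (sketch; assumes shell access to the node via minikube ssh, command string copied from the test):

	# exit status 0 => kubelet is running (test would fail); non-zero => kubelet stopped, the expected state here
	out/minikube-linux-amd64 ssh -p NoKubernetes-727469 "sudo systemctl is-active --quiet service kubelet"
	echo $?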

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.223301972s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.644824608s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.87s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-727469
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-727469: (2.583009007s)
--- PASS: TestNoKubernetes/serial/Stop (2.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-727469 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-727469 --driver=kvm2  --container-runtime=crio: (21.03470812s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.03s)

                                                
                                    
TestPause/serial/Start (116.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-928961 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-928961 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m56.796447189s)
--- PASS: TestPause/serial/Start (116.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-727469 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-727469 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.907642ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (157.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2158976067 start -p stopped-upgrade-266610 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0116 23:38:31.442697   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2158976067 start -p stopped-upgrade-266610 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m38.242567484s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2158976067 -p stopped-upgrade-266610 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2158976067 -p stopped-upgrade-266610 stop: (2.126039235s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-266610 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-266610 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.712776754s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (157.08s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (45.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-928961 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-928961 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.257595539s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.29s)

                                                
                                    
TestPause/serial/Pause (2.4s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-928961 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-928961 --alsologtostderr -v=5: (2.395678785s)
--- PASS: TestPause/serial/Pause (2.40s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-928961 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-928961 --output=json --layout=cluster: exit status 2 (269.975163ms)

                                                
                                                
-- stdout --
	{"Name":"pause-928961","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-928961","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
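
As the JSON above shows, a paused profile reports HTTP-style codes per component (200 OK for the kubeconfig, 418 Paused for the apiserver, 405 Stopped for the kubelet), and the status command itself exits 2, which is why the non-zero exit is tolerated. An illustrative way to pull one field out of that payload (jq is an assumption here, not something the test uses):

	# prints "Stopped" for the kubelet while the cluster is paused
	out/minikube-linux-amd64 status -p pause-928961 --output=json --layout=cluster \
	  | jq -r '.Nodes[0].Components.kubelet.StatusName'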

                                                
                                    
TestPause/serial/Unpause (1.12s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-928961 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-928961 --alsologtostderr -v=5: (1.124143687s)
--- PASS: TestPause/serial/Unpause (1.12s)

                                                
                                    
TestPause/serial/PauseAgain (0.95s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-928961 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

                                                
                                    
TestPause/serial/DeletePaused (1.04s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-928961 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-928961 --alsologtostderr -v=5: (1.036039943s)
--- PASS: TestPause/serial/DeletePaused (1.04s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (105.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0116 23:40:47.136166   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m45.748105275s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.75s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-266610
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-266610: (1.128674266s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m21.883452217s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.88s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (132.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m12.415687213s)
--- PASS: TestNetworkPlugins/group/calico/Start (132.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (122.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m2.300694471s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (122.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lljr6" [11b6988b-a785-4183-bdfe-bc1277dff4f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005267096s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-097488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-097488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zlbxv" [444fa217-e226-4594-9d5b-49f428da8b6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zlbxv" [444fa217-e226-4594-9d5b-49f428da8b6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005227874s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-097488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (16.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-097488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-99m2q" [797ec74a-324e-4777-a87a-b7516e17ef05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-99m2q" [797ec74a-324e-4777-a87a-b7516e17ef05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.00613559s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-097488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-097488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (100.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m40.07952377s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (107.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m47.243762042s)
--- PASS: TestNetworkPlugins/group/flannel/Start (107.24s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lw4lr" [aea09ca6-165f-4e69-96d4-56f1c577382e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00649318s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-097488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (16.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-097488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cd5pd" [a7e69b25-fc96-45b1-8b2f-dc455ffe1f45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 23:43:31.442620   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-cd5pd" [a7e69b25-fc96-45b1-8b2f-dc455ffe1f45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.005576442s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-097488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-097488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cd74f" [e6c1677b-0cf3-4723-a7d1-ed670c53095e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cd74f" [e6c1677b-0cf3-4723-a7d1-ed670c53095e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005079742s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-097488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)
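Note: the DNS step resolves the short name kubernetes.default through the pod's cluster DNS search path. If that ever needs debugging, resolving the fully qualified name is a useful companion check (cluster domain assumed to be the default cluster.local):

	kubectl --context calico-097488 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local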

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-097488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (67.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-097488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m7.829715633s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (149.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-771669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-771669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m29.784774754s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.78s)
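Note: FirstStart brings up a fresh profile pinned to Kubernetes v1.16.0 on the kvm2 driver with CRI-O. A quick way to confirm the pinned version actually landed on the node (the VERSION column reports the kubelet version):

	kubectl --context old-k8s-version-771669 get nodes -o wide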

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-097488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-097488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qqjlc" [3ad37525-56f0-4360-9f6a-8f1be8736f32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qqjlc" [3ad37525-56f0-4360-9f6a-8f1be8736f32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004134157s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-097488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9wtmb" [e0d6d24e-91cd-4b34-845b-3aac79eccd93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007370705s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-097488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (15.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-097488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nq9gs" [ca6ad84e-930d-4c91-a81d-6e601973ba5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nq9gs" [ca6ad84e-930d-4c91-a81d-6e601973ba5f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.004775599s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-097488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-097488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zgnft" [869a8a6b-199a-416c-8705-6a460580affc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zgnft" [869a8a6b-199a-416c-8705-6a460580affc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.005297925s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (122.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-085322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-085322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m2.508604281s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (122.51s)
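Note: --preload=false disables minikube's preloaded image/binary tarball, so the control-plane images are fetched individually during start instead of being unpacked from the preload. The images that ended up in the profile's CRI-O store can be listed with the same subcommand the VerifyKubernetesImages steps use later:

	out/minikube-linux-amd64 -p no-preload-085322 image list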

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-097488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-097488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-097488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (110.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-837871 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-837871 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m50.913571052s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (110.91s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (133.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-967325 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 23:45:44.014482   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:45:47.136908   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:46:00.967678   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-967325 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m13.262175826s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (133.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-771669 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9ad6da47-bf2c-47f4-bbc9-3cf1a88d081c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004617105s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-771669 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.50s)
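Note: DeployApp schedules a single busybox pod from testdata/busybox.yaml and, once it reports Running, reads the container's open-file-descriptor limit with ulimit -n. The same wait-then-check sequence by hand (8m mirrors the test's wait budget):

	kubectl --context old-k8s-version-771669 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context old-k8s-version-771669 exec busybox -- /bin/sh -c "ulimit -n"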

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-771669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-771669 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)
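Note: the enable step overrides the metrics-server addon's image and registry via --images/--registries, and the describe call is how the test confirms the override reached the Deployment. A narrower check of just the container image, assuming the addon lives in the kube-system/metrics-server Deployment referenced above:

	kubectl --context old-k8s-version-771669 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'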

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-085322 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1680c487-b710-4a5a-8067-25277e4b4735] Pending
helpers_test.go:344: "busybox" [1680c487-b710-4a5a-8067-25277e4b4735] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1680c487-b710-4a5a-8067-25277e4b4735] Running
E0116 23:47:23.604237   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:23.609498   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:23.619745   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:23.640025   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:23.680360   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:23.761456   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:23.921845   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:24.242423   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004480696s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-085322 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-085322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0116 23:47:24.882604   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-085322 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-837871 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7d76925-539a-4a46-9f8b-031294bfef9c] Pending
helpers_test.go:344: "busybox" [e7d76925-539a-4a46-9f8b-031294bfef9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0116 23:47:28.724302   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e7d76925-539a-4a46-9f8b-031294bfef9c] Running
E0116 23:47:32.960433   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:32.965784   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:32.976096   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:32.996490   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:33.036858   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:33.117273   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:33.277731   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:33.598402   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:47:33.845493   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:47:34.239468   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003679133s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-837871 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-837871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0116 23:47:35.519654   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-837871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020819973s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-837871 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d37b0631-de59-490c-8d46-839f3af16dde] Pending
helpers_test.go:344: "busybox" [d37b0631-de59-490c-8d46-839f3af16dde] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d37b0631-de59-490c-8d46-839f3af16dde] Running
E0116 23:48:04.567038   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004482075s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-967325 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-967325 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053598454s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-967325 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (398.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-771669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-771669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (6m38.279586664s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-771669 -n old-k8s-version-771669
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (398.55s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (562.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-085322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-085322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m22.303181146s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-085322 -n no-preload-085322
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (562.61s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (842.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-837871 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 23:50:10.015038   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:10.020357   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:10.030620   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:10.050886   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:10.091264   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:10.171620   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:10.332034   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:10.652671   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:11.293104   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:12.574281   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:15.135579   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:15.771795   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:50:16.804423   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:50:20.256325   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:50:22.132728   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-837871 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m1.902424846s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-837871 -n embed-certs-837871
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (842.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (835.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-967325 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 23:50:47.136071   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:50:50.977882   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:51:00.968099   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
E0116 23:51:03.092966   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:51:03.466230   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:51:17.213388   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:51:22.084495   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:51:31.938532   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:52:23.603559   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:52:25.014373   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:52:32.960321   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:52:39.134380   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:52:51.288694   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/kindnet-097488/client.crt: no such file or directory
E0116 23:52:53.858815   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:53:00.645219   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/auto-097488/client.crt: no such file or directory
E0116 23:53:19.621729   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:53:31.443211   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/ingress-addon-legacy-264702/client.crt: no such file or directory
E0116 23:53:38.240984   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:53:47.306835   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/calico-097488/client.crt: no such file or directory
E0116 23:54:05.925189   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/custom-flannel-097488/client.crt: no such file or directory
E0116 23:54:41.172267   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:54:55.289402   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:55:08.855058   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0116 23:55:10.014478   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:55:22.974881   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0116 23:55:37.699888   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
E0116 23:55:47.136237   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/addons-033244/client.crt: no such file or directory
E0116 23:56:00.968284   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/functional-949292/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-967325 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (13m55.178625997s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967325 -n default-k8s-diff-port-967325
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (835.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-771669 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
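Note: VerifyKubernetesImages lists everything in the profile's image store as JSON and flags entries that are not stock minikube/Kubernetes images (here the busybox test image and the kindnetd CNI image). For eyeballing the same list, a non-JSON format is easier to read; the supported --format values are assumed from current minikube releases:

	out/minikube-linux-amd64 -p old-k8s-version-771669 image list --format=table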

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-353558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0117 00:14:41.172036   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/enable-default-cni-097488/client.crt: no such file or directory
E0117 00:14:55.289468   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/flannel-097488/client.crt: no such file or directory
E0117 00:15:10.015111   14930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17975-6238/.minikube/profiles/bridge-097488/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-353558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.323306097s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.32s)
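Note: this profile starts with --network-plugin=cni and pushes a custom pod network CIDR into kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 (plus a ServerSideApply feature gate). Whether the CIDR was honoured can be read straight off the node object:

	kubectl --context newest-cni-353558 get nodes -o jsonpath='{.items[0].spec.podCIDR}{"\n"}'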

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-353558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-353558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.324291895s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-353558 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-353558 --alsologtostderr -v=3: (3.11620878s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-353558 -n newest-cni-353558
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-353558 -n newest-cni-353558: exit status 7 (85.128547ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-353558 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (46.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-353558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-353558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (46.137922524s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-353558 -n newest-cni-353558
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-353558 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-353558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-353558 -n newest-cni-353558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-353558 -n newest-cni-353558: exit status 2 (247.08847ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-353558 -n newest-cni-353558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-353558 -n newest-cni-353558: exit status 2 (243.63238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-353558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-353558 -n newest-cni-353558
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-353558 -n newest-cni-353558
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.46s)
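Note: the Pause sequence leans on the fact that minikube status exits non-zero for non-Running components: after pause the API server reports Paused and the kubelet Stopped (both queries returning exit status 2), and after unpause the same queries succeed. The per-component fields used above can also be pulled in one call; the combined Go template here is only a convenience and reuses the field names the test already queries:

	out/minikube-linux-amd64 status -p newest-cni-353558 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'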

                                                
                                    

Test skip (39/312)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
246 TestNetworkPlugins/group/kubenet 3.43
256 TestNetworkPlugins/group/cilium 3.89
268 TestStartStop/group/disable-driver-mounts 0.17
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-097488 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-097488" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-097488

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-097488"

                                                
                                                
----------------------- debugLogs end: kubenet-097488 [took: 3.270101458s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-097488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-097488
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-097488 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-097488" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-097488

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-097488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-097488"

                                                
                                                
----------------------- debugLogs end: cilium-097488 [took: 3.732273438s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-097488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-097488
--- SKIP: TestNetworkPlugins/group/cilium (3.89s)
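
Note: every ">>> host:" command above reports the same `Profile "cilium-097488" not found` message because the cilium profile is never started before debugLogs collects host diagnostics for the skipped network-plugin case, so the repetition is expected rather than a failure. As a minimal sketch (not minikube's own helper), a collector could check for the profile first by parsing `minikube profile list -o json`; the struct fields below assume that command's documented "valid"/"Name" output shape:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the assumed shape of `minikube profile list -o json`:
// a "valid" array whose entries carry a "Name" field.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

// profileExists returns true if the named profile appears in the valid list.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("cilium-097488")
	fmt.Println(ok, err)
}

With a guard like this, a log collector could emit a single "profile not found, skipping host diagnostics" line instead of repeating the same message for every host command.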

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-123117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-123117
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
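
For context, the skip above is a driver guard in the test table: the disable-driver-mounts case only runs under the virtualbox driver, and this job uses kvm2. A hedged, self-contained sketch of that pattern follows; the driver() helper and the table entries are illustrative stand-ins, not minikube's actual start_stop_delete_test.go code:

package example

import "testing"

// driver is a stand-in for however the suite determines the active VM driver.
func driver() string { return "kvm2" }

func TestStartStopGroups(t *testing.T) {
	cases := []struct {
		name          string
		requireDriver string // empty means the case runs on any driver
	}{
		{name: "disable-driver-mounts", requireDriver: "virtualbox"},
		{name: "old-k8s-version"},
	}
	for _, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			// Skip early when the case is pinned to a driver we are not running.
			if tc.requireDriver != "" && driver() != tc.requireDriver {
				t.Skipf("skipping %s - only runs on %s", t.Name(), tc.requireDriver)
			}
			// ... start/stop assertions for the case would go here ...
		})
	}
}

Run under the kvm2 driver, the disable-driver-mounts case skips immediately after its profile cleanup, which matches the 0.17s duration recorded above.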

                                                
                                    